Hex-Rays v7.3 Decompiler Comparison Page

More hexadecimal numbers in the output
Old:
    bool __fastcall ge_100000001(__int64 a1) { return a1 >= 4294967297LL; }

New:
    bool __fastcall ge_100000001(__int64 a1) { return a1 >= 0x100000001LL; }
When a constant looks nicer as a hexadecimal number, we now print it as hexadecimal by default. Naturally, beauty is in the eye of the beholder, but the new behavior produces more readable code, and you will less often feel compelled to change the number representation.

By the way, this tiny change is just one of numerous improvements that we keep adding in each release. Most of them go literally unnoticed; this time we simply decided to talk about them.
Support for variable size structures
Old:
    BlockNumber = *(UINT64 *)((char *)&EfiBootRecord[1].BlockHeader.Checksum + ExtentIndex64);
    BlockCount = *(UINT64 *)((char *)&EfiBootRecord[1].BlockHeader.ObjectOid + ExtentIndex64);

New:
    BlockNumber = EfiBootRecord->RecordExtents[ExtentIndex64].BlockNumber;
    BlockCount = EfiBootRecord->RecordExtents[ExtentIndex64].BlockCount;
EfiBootRecord points to a structure whose last member is RecordExtents[0]. In C/C++, such structures are considered variable-size structures. Now we handle them nicely.
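To illustrate the pattern, here is a minimal sketch of a variable-size structure in standard C. The names (Extent, BootRecord, last_block_number) are made up for illustration and are not the real EFI definitions; the point is the trailing flexible array member and the array-style access the decompiler now recovers instead of pointer arithmetic past the end of the header.

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical sketch of a variable-size structure, modeled loosely on the
   EfiBootRecord example above; field names are illustrative only. */
struct Extent {
    unsigned long long BlockNumber;
    unsigned long long BlockCount;
};

struct BootRecord {
    unsigned int  ExtentCount;
    struct Extent RecordExtents[];   /* C99 flexible array member */
};

/* Allocate a record with n trailing extents and return the BlockNumber of
   the last one, using the array-style indexing shown in the new output. */
static unsigned long long last_block_number(unsigned n) {
    struct BootRecord *rec = malloc(sizeof *rec + n * sizeof(struct Extent));
    rec->ExtentCount = n;
    for (unsigned i = 0; i < n; ++i) {
        rec->RecordExtents[i].BlockNumber = 100 + i;
        rec->RecordExtents[i].BlockCount  = 8;
    }
    unsigned long long last = rec->RecordExtents[n - 1].BlockNumber;
    free(rec);
    return last;
}
```

The allocation reserves the header plus n extents in one block, which is exactly the layout the decompiler has to recognize.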
UTF-32 strings are printed inline
    .rodata:0000000000000120 text "UTF-32LE", 'This is U"Hello"',0
    ...

Old:
    v10 = std::ostream::operator<<(v9, aThisIsUHello_0);

New:
    v3 = std::operator<<<std::char_traits<char>>(&std::cout, U"This is U\"Hello\"", envp);
We were already printing UTF-8 and other string types, but UTF-32 was not supported yet. Now we print it with the 'U' prefix.
Better argument detection for printf
Old:
    int __fastcall ididi(int a1, int a2, __int64 a3, int a4, __int64 a5, int a6)
    {
      int v6; // r1
      char v8; // [sp+4h] [bp-34h]
      int varg_r0; // [sp+28h] [bp-10h]
      __int64 varg_r2; // [sp+30h] [bp-8h]

      varg_r0 = a1;
      varg_r2 = a3;
      my_print("d=%I64d\n", a2, a3);
      my_print("d1=%I64d\n", v6, a5);
      my_print("%d-%I64d-%d-%I64d-%d\n", varg_r0, varg_r2, a4, v8, a5, a6);
      return 0;
    }

New:
    int __fastcall ididi(int a1, __int64 a2, int a3, __int64 a4, int a5)
    {
      int varg_r0; // [sp+28h] [bp-10h]
      __int64 varg_r2; // [sp+30h] [bp-8h]

      varg_r0 = a1;
      varg_r2 = a2;
      my_print("d=%I64d\n", a2);
      my_print("d1=%I64d\n", a4);
      my_print("%d-%I64d-%d-%I64d-%d\n", varg_r0, varg_r2, a3, a4, a5);
      return 0;
    }
The difference between these outputs is subtle but pleasant. The new version managed to determine the variable types based on the printf format string. While the old version split the 64-bit value across separate arguments, the new version correctly recovered it as a single __int64 a2.

Also, the number of variadic arguments is determined more precisely now.
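The argument-counting part of this analysis can be sketched in a few lines. This is a minimal illustration, not Hex-Rays' actual algorithm, and the helper name count_printf_args is made up: it walks a printf-style format string and counts how many variadic arguments the conversions consume, including '*' width/precision arguments and excluding the literal "%%".

```c
#include <assert.h>
#include <string.h>

/* Minimal sketch of deriving the variadic argument count from a printf-style
   format string.  '*' in width/precision consumes an extra argument; "%%"
   consumes none.  Length modifiers and flags are skipped, not validated. */
static int count_printf_args(const char *fmt) {
    int n = 0;
    for (const char *p = fmt; *p; ++p) {
        if (*p != '%')
            continue;
        ++p;                          /* character after '%' */
        if (*p == '%' || *p == '\0')
            continue;                 /* literal '%' or trailing '%' */
        /* skip flags, width, precision, and length modifiers */
        while (*p && strchr("-+ #0123456789.*hlLqjzt", *p)) {
            if (*p == '*')
                ++n;                  /* dynamic width/precision */
            ++p;
        }
        if (*p)
            ++n;                      /* the conversion character itself */
    }
    return n;
}
```

Running this over the format strings above yields exactly the argument counts the new decompiler produces (5 for "%d-%I64d-%d-%I64d-%d\n", 8 for the scanf formats in the next section).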
Better argument detection for scanf
Old:
    scanf("8: %d%i %x%o %s%s %C%c", &v12, &v7, &v3, &v4, &v2, &v9, &v8, &v13, &v10, &v0, &v6, &v5, &v1, &v11);
    scanf(
      "8: %[ a-z]%c %2c%c %2c%2c %[ a-z]%c",
      &v12, &v7, &v3, &v4, &v2, &v9, &v8, &v13, &v10, &v0, &v6, &v5, &v1, &v11);

New:
    scanf("8: %d%i %x%o %s%s %C%c", &v12, &v7, &v3, &v4, &v2, &v9, &v8, &v13);
    scanf("8: %[ a-z]%c %2c%c %2c%2c %[ a-z]%c", &v12, &v7, &v3, &v4, &v2, &v9, &v8, &v13);
Similar logic works for scanf-like functions. Please note that the old version misdetected the number of arguments. It was possible to correct the misdetected arguments with the Numpad-Minus hotkey, but it is always better when there is less routine work on your shoulders, right?
Resolved TEB references
Old:
    v15 = __readfsdword(0);

New:
    v15 = NtCurrentTeb()->NtTib.ExceptionList;
While seasoned reversers know what is located at fs:0, it is still better to have it spelled out. Besides, the type of v15 is automatically detected as struct _EXCEPTION_REGISTRATION_RECORD *.
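For readers less familiar with the TEB: on 32-bit Windows the FS segment register points at the thread's NT_TIB, whose very first field is the SEH ExceptionList, which is why fs:[0] resolves the way it does. Below is a simplified sketch of the start of that layout (field names match the Windows headers, but this is not the full winnt.h definition and SEH details are omitted):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Simplified sketch of the start of the 32-bit NT_TIB; only the first three
   fields are shown.  On x86 Windows, FS points at this structure, so
   fs:[0] is ExceptionList, fs:[4] StackBase, fs:[8] StackLimit. */
typedef struct SKETCH_EXCEPTION_REGISTRATION_RECORD {
    uint32_t Next;     /* 32-bit pointer to the next registration record */
    uint32_t Handler;  /* 32-bit pointer to the handler */
} SKETCH_EXCEPTION_REGISTRATION_RECORD;

typedef struct SKETCH_NT_TIB32 {
    uint32_t ExceptionList;   /* fs:[0x00] */
    uint32_t StackBase;       /* fs:[0x04] */
    uint32_t StackLimit;      /* fs:[0x08] */
} SKETCH_NT_TIB32;
```

The offsets are the whole point: __readfsdword(0) is a read of the field at offset 0.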
Better automatic selection of union fields
Old:
    if ( *((_BYTE *)&entry->empty + 1) )

New:
    if ( entry->single.byte_count )
Again, the user can specify the union field to be used in the output (the hotkey is Alt-Y), but there are situations where it can be determined automatically based on the access type and size. The above example illustrates this point. JFYI, the type of entry is:
union __XmStringEntryRec
{
  _XmStringEmptyHeader empty;
  _XmStringOptSegHdrRec single;
  _XmStringUnoptSegHdrRec unopt_single;
  _XmStringArraySegHdrRec multiple;
};

struct __XmStringEmptyHeader
{
  unsigned __int32 type : 2;
};

struct __XmStringOptSegHdrRec
{
  unsigned __int32 type : 2;
  unsigned __int32 text_type : 2;
  unsigned __int32 tag_index : 3;
  unsigned __int32 rend_begin : 1;
  unsigned __int8 byte_count;
  unsigned __int32 rend_end : 1;
  unsigned __int32 rend_index : 4;
  unsigned __int32 str_dir : 2;
  unsigned __int32 flipped : 1;
  unsigned __int32 tabs_before : 3;
  unsigned __int32 permanent : 1;
  unsigned __int32 soft_line_break : 1;
  unsigned __int32 immediate : 1;
  unsigned __int32 pad : 2;
};
While we cannot handle bitfields yet, their presence does not prevent using the other, regular fields of the structure.
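The selection principle can be demonstrated with a much simpler union than the Motif one. This is a made-up model, not the real __XmStringEntryRec: among the members, only one has a one-byte field at the accessed offset, so a byte read there can only mean that field.

```c
#include <assert.h>
#include <stdint.h>

/* Simplified, hypothetical model of automatic union field selection: only
   the 'single' member has a one-byte field at offset 1, so a byte read at
   that offset must be byte_count. */
union Entry {
    struct { uint8_t type; }                     empty;
    struct { uint8_t type; uint8_t byte_count; } single;
    struct { uint8_t type; uint16_t length; }    multiple;  /* length at +2 */
};

/* The raw access the old output produced ... */
static uint8_t read_raw_byte1(const union Entry *e) {
    return *((const uint8_t *)&e->empty + 1);
}

/* ... reads exactly the field the new output names. */
static uint8_t demo(void) {
    union Entry e = { .single = { .type = 1, .byte_count = 42 } };
    assert(read_raw_byte1(&e) == e.single.byte_count);
    return read_raw_byte1(&e);
}
```

A decompiler doing this automatically spares the user one Alt-Y per access.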
Yet one more example of union fields
Old:
    void __fastcall h_generic_calc_Perm32x8(V256 *res, V256 *argL, V256 *argR)
    {
      LODWORD(res->w64[0]) = *((_DWORD *)argL->w64 + (argR->w64[0] & 7));
      HIDWORD(res->w64[0]) = *((_DWORD *)argL->w64 + (HIDWORD(argR->w64[0]) & 7));
      LODWORD(res->w64[1]) = *((_DWORD *)argL->w64 + (argR->w64[1] & 7));
      HIDWORD(res->w64[1]) = *((_DWORD *)argL->w64 + (HIDWORD(argR->w64[1]) & 7));
      LODWORD(res->w64[2]) = *((_DWORD *)argL->w64 + (argR->w64[2] & 7));
      HIDWORD(res->w64[2]) = *((_DWORD *)argL->w64 + (HIDWORD(argR->w64[2]) & 7));
      LODWORD(res->w64[3]) = *((_DWORD *)argL->w64 + (argR->w64[3] & 7));
      HIDWORD(res->w64[3]) = *((_DWORD *)argL->w64 + (HIDWORD(argR->w64[3]) & 7));
    }

New:
    void __fastcall h_generic_calc_Perm32x8(V256 *res, V256 *argL, V256 *argR)
    {
      res->w32[0] = argL->w32[argR->w32[0] & 7];
      res->w32[1] = argL->w32[argR->w32[1] & 7];
      res->w32[2] = argL->w32[argR->w32[2] & 7];
      res->w32[3] = argL->w32[argR->w32[3] & 7];
      res->w32[4] = argL->w32[argR->w32[4] & 7];
      res->w32[5] = argL->w32[argR->w32[5] & 7];
      res->w32[6] = argL->w32[argR->w32[6] & 7];
      res->w32[7] = argL->w32[argR->w32[7] & 7];
    }
I could not resist the temptation to include one more example of automatic union field selection. How beautiful the new code is!
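The new output is clean enough to re-type as portable C and check. The V256 layout below is a guess based on the listing (a 256-bit value viewable as four 64-bit or eight 32-bit lanes); the function body is taken directly from the new output, compressed into a loop.

```c
#include <assert.h>
#include <stdint.h>

/* Layout inferred from the listing above: 256 bits, two overlapping views. */
typedef union {
    uint64_t w64[4];
    uint32_t w32[8];
} V256;

/* Re-typed version of the decompiled h_generic_calc_Perm32x8: each result
   lane i selects argL's lane indexed by the low 3 bits of argR's lane i. */
static void perm32x8(V256 *res, const V256 *argL, const V256 *argR) {
    for (int i = 0; i < 8; ++i)
        res->w32[i] = argL->w32[argR->w32[i] & 7];
}

/* Self-test: permuting with reversed indices reverses the lanes. */
static int perm_selftest(void) {
    V256 L = { .w32 = {0, 10, 20, 30, 40, 50, 60, 70} };
    V256 R = { .w32 = {7, 6, 5, 4, 3, 2, 1, 0} };
    V256 out;
    perm32x8(&out, &L, &R);
    for (int i = 0; i < 8; ++i)
        if (out.w32[i] != (uint32_t)(10 * (7 - i)))
            return 0;
    return 1;
}
```

The LODWORD/HIDWORD gymnastics in the old output were just these w32 lane accesses seen through the w64 view.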
Improved support for EABI helpers
Old:
    int __cdecl main(int argc, const char **argv, const char **envp)
    {
      int v3; // r0
      int v4; // r0
      int v5; // r0
      int v6; // r0
      int v7; // r0
      __int64 v8; // r0
      int v9; // r2
      __int64 v11; // [sp+0h] [bp-14h]
      int v12; // [sp+Ch] [bp-8h]
      int v13; // [sp+Ch] [bp-8h]

      v3 = _mulvsi3(7, 6, envp);
      v4 = _negvsi2(v3);
      v5 = _addvsi3(v4, 101);
      v12 = _subvsi3(v5, 17);
      printf("r = %d == 42\n", v12);
      v11 = _mulvdi3(7, 0, 6, 0);
      v6 = _negvdi2(v12, v12 >> 31);
      v7 = _addvdi3(v6, v6 >> 31, 101, 0);
      v8 = _subvdi3(v7, v7 >> 31, 17, 0);
      printf("r = %lld == 42\n", HIDWORD(v8), v11);
      v13 = _mulvsi3(0x7FFFFFFF, 0x3FFFFFFF, v9);
      printf("ABORT %d\n", v13);
      return 0;
    }

New:
    int __cdecl main(int argc, const char **argv, const char **envp)
    {
      printf("r = %d == 42\n", 42);
      printf("r = %lld == 42\n", 42LL);
      printf("ABORT %d\n", 0x40000001);
      return 0;
    }
No comments needed, we hope. The new decompiler managed to fold the constant expressions after replacing the EABI helpers with the corresponding operators.
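The folding is easy to verify by hand. The EABI helpers above are overflow-trapping versions of multiplication, negation, addition, and subtraction; when no overflow occurs they reduce to the plain operators, so the whole chain collapses to a constant:

```c
#include <assert.h>

/* Mirrors the helper chain from the listing with plain operators:
   7*6 = 42, negated to -42, plus 101 = 59, minus 17 = 42. */
static int fold_demo(void) {
    int v = 7 * 6;   /* _mulvsi3(7, 6)   */
    v = -v;          /* _negvsi2         */
    v = v + 101;     /* _addvsi3(., 101) */
    v = v - 17;      /* _subvsi3(., 17)  */
    return v;
}

/* The 64-bit helper chain computes the same value. */
static long long fold_demo64(void) {
    long long w = 7LL * 6;
    return -w + 101 - 17;
}
```

Once the helpers become operators, constant propagation does the rest, which is exactly what the new output shows.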
Improved local variable allocation
Old:
    // local variable allocation has failed, the output may be wrong!
    _ITM_TYPE_CF __usercall `anonymous namespace'::gl_wt_dispatch::load@<edx:eax>(
            `anonymous namespace'::gl_wt_dispatch *const this@<eax>,
            const _ITM_TYPE_CF *ptr@<edx>)
    {
      int v2; // eax
      `anonymous namespace'::gl_wt_dispatch *const *v3; // edx
      `anonymous namespace'::gl_wt_dispatch *const v4; // eax
      const _ITM_TYPE_CF *v5; // edx
      _ITM_TYPE_CF result; // rax

      v2 = *(_DWORD *)(__readgsdword(0) + MEMORY[0x8003318]);
      if ( MEMORY[0x8002300] == *(_DWORD *)(v2 + 196) )
      {
        this = *v3;
      }
      else
      {
        `anonymous namespace'::gl_wt_dispatch::validate((#42 *)v2);
        *(_ITM_TYPE_CF *)&this = `anonymous namespace'::gl_wt_dispatch::ITM_RfWCF(v4, v5);
      }
      LODWORD(result.real) = this;
      return result;
    }

New:
    _ITM_TYPE_CF __usercall `anonymous namespace'::gl_wt_dispatch::load@<edx:eax>(
            `anonymous namespace'::gl_wt_dispatch *const this@<eax>,
            const _ITM_TYPE_CF *ptr@<edx>)
    {
      int v2; // eax
      `anonymous namespace'::gl_wt_dispatch *const v4; // eax
      const _ITM_TYPE_CF *v5; // edx

      v2 = *(_DWORD *)(__readgsdword(0) + MEMORY[0x8003318]);
      if ( MEMORY[0x8002300] == *(_DWORD *)(v2 + 196) )
        return *ptr;
      `anonymous namespace'::gl_wt_dispatch::validate((#42 *)v2);
      return `anonymous namespace'::gl_wt_dispatch::ITM_RfWCF(v4, v5);
    }
Now it works better, especially in complex cases like the one above.
Better recognition of string references
Old:
    sub_1135FC(-266663568, 89351520);
    if ( v2 > 0x48u )
    {
      sub_108998(89351556);

New:
    sub_1135FC(-266663568, "This is a long long long string");
    if ( v2 > 0x48u )
    {
      sub_108998("Another str");
In this case too, the user could set the prototype of sub_1135FC as accepting a char *, and that would be enough to reveal the string references in the output, but the new decompiler can do it automatically.
Better handling of structures returned by value
Old:
      _BYTE v1[12]; // ax
      mystruct result; // 0:ax.11
      ...
      *(_QWORD *)result.ca1 = *(_QWORD *)v1;
      result.s1 = *(_WORD *)&v1[8];
      result.c1 = v1[10];
      return result;
    }

New:
      _BYTE v1[12]; // rax
      ...
      return *(mystruct *)v1;
    }
The old output used a very awkward sequence to copy the structure; the new output eliminates it as unnecessary.
More while loops
Old:
    do
      v5 = *++v4;
    while ( v5 );

New:
    while ( *++v4 )
      ;
Do you care about this improvement? Probably not, because the difference is tiny. However, in addition to being simpler, the new code eliminates a temporary variable, v5.

A tiny improvement, but an improvement it is.
Shorter code
Old:
    unsigned __int8 *__fastcall otp_memset(unsigned __int8 *pDest, unsigned __int8 val, int size)
    {
      unsigned __int8 *i; // r3
      _BOOL1 v4; // cf

      for ( i = pDest; ; ++i )
      {
        v4 = (unsigned int)size-- >= 1;
        if ( !v4 )
          break;
        *i = val;
      }
      return pDest;
    }

New:
    unsigned __int8 *__fastcall otp_memset(unsigned __int8 *pDest, unsigned __int8 val, int size)
    {
      unsigned __int8 *i; // r3

      for ( i = pDest; (unsigned int)size-- >= 1; ++i )
        *i = val;
      return pDest;
    }
Another tiny improvement made the output considerably shorter. We like it!
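The new output is directly compilable, which makes it easy to confirm the two forms are equivalent. Here it is re-typed as portable C, with one behavioral detail worth a comment: thanks to the unsigned cast on size--, a size of 0 fails the very first test and writes nothing.

```c
#include <assert.h>

/* The decompiled otp_memset from above, re-typed as portable C.  With
   size == 0, (unsigned)0 >= 1 is false on the first test, so nothing is
   written; note that a negative size would wrap to a huge unsigned value. */
static unsigned char *otp_memset(unsigned char *pDest, unsigned char val, int size) {
    unsigned char *i;
    for (i = pDest; (unsigned int)size-- >= 1; ++i)
        *i = val;
    return pDest;
}

/* Self-test: fill the first 3 bytes only, leave the rest untouched. */
static int otp_selftest(void) {
    unsigned char buf[8] = {0};
    otp_memset(buf, 0xAB, 3);
    otp_memset(buf + 4, 0xCD, 0);   /* zero-length fill: no writes */
    return buf[0] == 0xAB && buf[2] == 0xAB && buf[3] == 0 && buf[4] == 0;
}
```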
Improved recognition of magic divisions
Old:
    __int64 __fastcall konst_mod251_shr3(unsigned __int64 a1)
    {
      unsigned __int64 v1; // rcx

      v1 = a1 >> 3;
      _RDX = v1 + ((v1 * (unsigned __int128)0x5197F7D73404147ui64) >> 64);
      __asm { rcr rdx, 1 }
      return v1 - 251 * (_RDX >> 7);
    }

New:
    unsigned __int64 __fastcall konst_mod251_shr3(unsigned __int64 a1)
    {
      return (a1 >> 3) % 0xFB;
    }
This is a very special case: a division that uses the rcr instruction. Our microcode does not have an opcode for it, but we implemented logic to handle some special cases, just so you do not waste your time trying to decipher the convoluted code (yes, rcr tends to produce code that is difficult to understand).
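For readers unfamiliar with the underlying trick: compilers replace division by a constant with a "magic number" multiply and shift, and the decompiler has to run that transformation in reverse. A classic 32-bit instance, unrelated to the exact constants above but showing the same technique, is unsigned division by 5:

```c
#include <assert.h>
#include <stdint.h>

/* Classic magic-number division: 0xCCCCCCCD == ceil(2^34 / 5), and
   (x * 0xCCCCCCCD) >> 34 equals x / 5 exactly for every uint32_t x. */
static uint32_t div5(uint32_t x) {
    return (uint32_t)(((uint64_t)x * 0xCCCCCCCDu) >> 34);
}

/* Check the identity on boundary and arbitrary values. */
static int div5_selftest(void) {
    uint32_t samples[] = {0u, 1u, 4u, 5u, 6u, 250u, 251u,
                          0x7FFFFFFFu, 0x80000000u, 0xFFFFFFFFu};
    for (unsigned i = 0; i < sizeof samples / sizeof samples[0]; ++i)
        if (div5(samples[i]) != samples[i] / 5)
            return 0;
    return 1;
}
```

The konst_mod251_shr3 example is the same idea for a 64-bit modulus: compute the quotient by magic multiply (with rcr patching up the 65th bit of the intermediate sum), then recover the remainder as v1 - 251 * quotient.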
Fewer gotos
Old:
    __int64 __fastcall sub_0(__int64 a1, int *a2)
    {
      int v2; // eax
      int v3; // eax
      int v4; // eax

      v2 = *a2;
      if ( *a2 > 522 )
      {
        v4 = v2 - 4143;
        if ( !v4 || v4 == 40950 )
          goto LABEL_8;
    LABEL_9:
        return 0;
      }
      if ( v2 != 522 )
      {
        v3 = v2 - 71;
        if ( v3 )
        {
          if ( (unsigned int)(v3 - 205) >= 2 )
            goto LABEL_9;
        }
      }
    LABEL_8:
      return 1;
    }

New:
    _BOOL8 __fastcall sub_0(__int64 a1, int *a2)
    {
      int v2; // eax
      int v3; // eax
      int v4; // eax

      v2 = *a2;
      if ( *a2 > 522 )
      {
        v4 = v2 - 4143;
        return !v4 || v4 == 40950;
      }
      if ( v2 != 522 )
      {
        v3 = v2 - 71;
        if ( v3 )
        {
          if ( (unsigned int)(v3 - 205) >= 2 )
            return 0;
        }
      }
      return 1;
    }
Well, we cannot claim that we produce fewer gotos in all cases, but there is some improvement for sure.

Also note that the return type got improved: it is now immediately visible that the function returns a boolean (0/1) value.
Division may generate an exception
Old:
    __int64 __fastcall sub_4008C0(int a1)
    {
      int v1; // ecx

      v1 = 2;
      if ( a1 > 2 )
      {
        do
        {
          nanosleep(&rmtp, &rqtp);
          ++v1;
        }
        while ( v1 != a1 );
      }
      return 0LL;
    }

New:
    __int64 __fastcall sub_4008C0(int a1)
    {
      int v1; // ecx
      int v2; // edx
      int v4; // [rsp+0h] [rbp-4h]

      v1 = 2;
      if ( a1 > 2 )
      {
        do
        {
          nanosleep(&rmtp, &rqtp);
          v2 = a1 % v1++;
          v4 = 1 / v2;
        }
        while ( v1 != a1 );
      }
      return 0LL;
    }
What a surprise: the new code is longer and more complex! Indeed it is, because the decompiler is now more careful with division instructions. They may potentially generate a zero-division exception, and completely hiding them from the output could be misleading.

If you prefer the old behavior, turn off division preservation in the configuration file.
Order of variadic arguments
Old:
    int __cdecl func1(const float a, int b, void *c)
    {
      return sub_88("%f, %d, %p\n", (unsigned int)b, c, a);
    }

New:
    int __cdecl func1(const float a, int b, void *c)
    {
      return sub_88("%f, %d, %p\n", a, (unsigned int)b, c);
    }
Do you notice the difference? If not, here is a hint: the order of the arguments of sub_88 differs. The new code is more correct because the format specifiers match the variable types. For example, %f matches float a.

At first sight the old code looks completely wrong, but (surprise!) it works correctly on x64 machines. This is because floating-point and integer arguments are passed in different registers, so the relative order of floating-point and integer arguments in the call does not matter much.

Nevertheless, the new code causes less confusion.
Improved division recognition
Old:
    int int_h_mod_m32ui64(void)
    {
      __int64 v0; // r10

      v0 = h();
      return (abs64(v0) & 0x1F ^ (SHIDWORD(v0) >> 31)) - (SHIDWORD(v0) >> 31);
    }

New:
    int int_h_mod_m32ui64(void)
    {
      return h() % 32;
    }
This is a never ending battle, but we advance!
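The old output is a standard branchless pattern for a signed remainder by a power of two, and the identity behind it can be checked directly. With s = v >> 63 (an arithmetic shift giving 0 for non-negative v and -1 for negative v), ((|v| & 31) ^ s) - s equals v % 32 under C's truncating division (the sketch below assumes arithmetic right shift on signed values, which every mainstream compiler provides, and excludes INT64_MIN where |v| overflows):

```c
#include <assert.h>
#include <stdint.h>

/* Branchless signed remainder by 32, mirroring the pattern in the old
   output: mask the magnitude, then conditionally negate via XOR/subtract. */
static int64_t mod32(int64_t v) {
    int64_t s = v >> 63;              /* arithmetic shift: 0 or -1 */
    int64_t a = v < 0 ? -v : v;       /* |v|, valid for v != INT64_MIN */
    return ((a & 0x1F) ^ s) - s;
}

/* Compare against the C operator on both signs and the boundaries. */
static int mod32_selftest(void) {
    int64_t samples[] = {-100, -33, -32, -31, -1, 0, 1, 31, 32, 33, 100};
    for (unsigned i = 0; i < sizeof samples / sizeof samples[0]; ++i)
        if (mod32(samples[i]) != samples[i] % 32)
            return 0;
    return 1;
}
```

Recognizing this pattern is what lets the decompiler print the plain h() % 32.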

NOTE: these are just some selected examples that can be illustrated as side-by-side differences. There are many other improvements and new features that are not mentioned on this page; we simply got tired of selecting them. Some of the improvements that did not make it to this page:

  • objc-related improvements
  • value range analysis can eliminate more useless code
  • better resolving of got-relative memory references
  • too big shift amounts are converted to lower values (e.g. 33->1)
  • more for-loops
  • better handling of fragmented variables
  • many other things...

This is all for the moment. Please come back for more examples!