The death of wordpress

I have decided to host my own blog because it's clear that the WordPress 5 disease has spread to this site. WordPress 4.x forever. This shit is just too slow and laggy to use, so this is my last post on this site.

Go to my website: http://mjsstuf.x10host.com/

When the standard is wrong

Ah, standards: the dictatorship of a few cunts with no real-world experience defining how things should be done. Standards committees blunder forward, making a huge mess that can never be fixed; we would probably have been better off without them. At least with de facto standards the good prevails and the bad withers and dies; with formal standards the bad persists forever, and nothing new can replace it without the say-so of the few moldy old cunts who control said standard.

One thing which is even worse is when the standard takes something which already exists and redefines it differently, fucking up all that came before for no good reason except that, in their dictatorship, they are the ultimate authority and everyone else can fuck off.

Now, what brought on this rant is the C function wprintf. This function was invented by Microsoft to support their UCS-2 character set on Windows NT. The format specifier for strings was defined such that %s always matches the character type of the output function. This allows the same code to be compiled as an ANSI or wide-string build with a few simple macros; the format string itself is identical in both cases.

When the C standard cunts finally woke up a decade later and decided to adopt this function, they did not take it as is; no, they completely inverted the meaning of a fundamental part, the %s formatter. This made it basically 100% incompatible with Windows, the system that created the fucking function. Fuck that pile of cunts: you took a function you did not create and twisted it to serve your own purposes, completely fucking up all Windows code in the process.

Those responsible for this crime against all of computing deserve death by crucifixion, they really do. They had no right to redefine that function; it was already well established for a fucking decade. The change they made cannot even be argued to be better; it's strictly less functional than how Microsoft defined it. Fuck them, I hope they die of cancer soon.

More information on the subject:
https://devblogs.microsoft.com/oldnewthing/20190830-00/?p=102823

A quick thought on x86_64: part 1

There are some major design oversights in the x64 architecture, and one which is particularly painful is the lack of an absolute 64-bit jump. To perform an absolute 64-bit jump on x64 one needs to use an indirect jump; the instruction plus the 8-byte pointer it reads comes to a nauseating 14 bytes in total.

It did not have to be this way. The instruction JMP ptr16:32, opcode $EA, is invalid in x64; that opcode could have been used for an absolute 64-bit jump. But no, it's invalid: a wasted opcode. It could have been extremely useful, but it was not to be.

When I become leader of this world this will be corrected: all existing x64 CPUs will be recalled, melted down, and replaced with period-correct recreations, the only difference being that the $EA opcode resolves to a 64-bit jump. I will even have all the landfills exhumed so that not a single CPU can escape the correction. All documentation will then be altered so that no trace of the abomination can ever be found. Not a soul will be permitted to ever speak of this again, on pain of torture.

GCC gets worse each version: part 2

So today I discovered another piece of brain damage from GCC. Seriously, what the fuck is wrong with this compiler? Its retardation exceeds all understanding; no human programmer could ever come up with such assembly. It would take effort to write something this bad.

This code is the consequence of overzealous and misapplied optimizations; GCC is always doing this shit and it's a constant battle to deal with. My code has so much inline assembly just to fix the broken optimizer. The macro #define VARFIX(x) asm("" : "+r"(x)) is used extensively; it does nothing but trick the compiler into thinking the variable was changed. It's sad when hiding information from the compiler results in better code, but that's all par for the course with GCC, the most retarded of all compilers.

The source code:
[image: test code]

GCC 9.3 output:
[image: gcc 9.3 code]

GCC 8.3 output:
[image: gcc 8.3 code]

C++ lambdas suck

C++ lambdas are a killer feature; it's just a shame that they are too bloated to ever be used. I had so much hope for this feature, and I was crushed when I saw the generated assembly. Seriously, WTF: every single variable captured by reference gets its own separate pointer in the closure object. You could capture everything using just a single pointer to the parent scope's frame. What a lazy, good-for-nothing excuse for an implementation. I am not sure if something in the C++ standard has steered the implementation this way, but every compiler does this as far as I can tell.

Here is a comparison between the generated assembly for GCC C nested functions and C++ lambdas; I think you should clearly be able to see why I am so angry. I don't understand why anyone would think this bullshit code is acceptable; it makes the whole feature completely worthless.

Lambda test code:
[image: lambda test]

C++ Lambda assembly:
[image: lambda-c++]

GCC C Nested function equivalent:
[image: nested function c]

GCC gets worse each version: part 1

Each version of GCC is somehow worse than the last. The compiler does increasingly retarded things with each release. I have observed serious bugs go unfixed for years; do they even test this shit?

Anyway, on to today's insanity. Current versions of GCC refuse to use XMM registers in 32-bit code for general purposes such as memory copying and filling. It used to work, but in GCC 8.1, for some inexplicable reason, it stopped. Using XMM registers for memcpy and memset is a huge improvement in both speed and code size, but apparently we can't have that anymore in 32-bit. FIX YOUR SHIT.

On the hidden improvements of NT6x: part 1

As a faithful user of NT5x, it has become increasingly painful as much software has dropped support and requires Vista or newer; hell, even Windows 7 support is waning now, and I am still on XP/2k3. Anyway, over the years I have made various efforts to hack in newer APIs so that I can run this newer software. I have never made much progress, due to working on many projects at once and also the fact that I am a perfectionist. I can also never quite decide how to go about it, so I end up floundering around with a bunch of half-implemented tests.

During my experiments I have discovered a number of ways in which NT5x is completely broken. The piece of brain damage I will discuss here today is one of several related to dynamically loaded libraries.

LoadLibrary:
When one would like to load a library at run time, one calls this function. The specified module and all its dependencies are loaded, and a corresponding call to FreeLibrary will unload the module; its dependencies are also unloaded if no longer needed. All nice and simple; there is just one major flaw in this system.

Export forwarders:
So a DLL is able to forward an export to another DLL. This export forwarding has a number of performance advantages: 1. no thunks are needed; the function pointer is fetched directly from the target DLL. 2. The target DLL is loaded on demand; if none of the forwarded functions are used, the target DLL need not be loaded. There is one major problem with this mechanism, however: DLLs loaded by way of a forwarded export can never be unloaded. Windows does not track these loads and has no way to determine when these DLLs should be unloaded.

NT6x fix:
When a DLL is unloaded, its import table is walked and each dependent DLL has its reference count decremented; those DLLs are then also unloaded if their ref-count drops to zero. This mechanism is completely blind to DLLs loaded by way of forwarding, so in Vista a new mechanism was added to track forwarder-loaded DLLs specifically.

1. A new field was added to LDR_DATA_TABLE_ENTRY: ForwarderLinks.
2. When a DLL is loaded by forwarder, the function
LdrpRecordForwarder(PLDR_DATA_TABLE_ENTRY LdrEntry, PLDR_DATA_TABLE_ENTRY ldrTarget)
is called; this function inserts the target DLL into the ForwarderLinks list.
3. The ForwarderLinks list structure:
struct ForwarderLinks_t { LIST_ENTRY list; PLDR_DATA_TABLE_ENTRY* target; int refCount; };
Each time the target DLL is loaded, LdrpRecordForwarder is called; the entry is inserted into the ForwarderLinks list the first time, and on subsequent calls the refCount is incremented. The ref-count field is not really needed; it could have been implemented without one, but they decided to leave the existing call to LdrpLoadDll, which always increments the DLL ref-count. A lot of the changes made in Vista feel very tacked on.
4. LdrUnloadDll has been altered as necessary to use this new linked list to correctly unload DLLs which were loaded by forwarding and are no longer needed. Pretty simple fix; why on earth was this allowed to stay broken for so long? It took them more than 10 years to fix this since the first Windows NT release.


Beware the page heap

So I nearly pulled out all my hair today trying to figure out the most insane and bizarre problem. I thought my OS had gotten fucked up in some way; I even worried that I had some kind of infection. There was, however, nothing wrong except my stupidity.

It began when, suddenly and for no reason I could determine, a piece of software I was working on got 10 times slower. It is a development tool I produced and make extensive use of, so being 10 times slower was instantly noticed. At first I thought it was some kind of bug introduced by the compiler flags I had recently changed, but after changing them all back and recompiling everything, nothing changed.

At this point I was thoroughly confused. Maybe the input data had changed somehow, making the tool take more time to run; but even though the tool has sloppy n² algorithms, 10x slower is still a huge change when I did not recall changing anything at all. At some point I used my sampling profiler to determine where in the tool the slowdown occurred: the symbol lookup function, which uses a crappy n² search over an array and was taking up all the time. That function performed some 40 million string comparisons during the execution of the program, which is quite a lot, but it was taking 3000 ms to run, which is terrible.

In desperation I pulled out an old binary and tried that; the problem persisted. I then copied the code to another computer, ran the current version, and there was no problem. I thought maybe my CPU had down-clocked or broken in some way, so I ran the tool in a VM on the same machine and it worked fine. I restarted in safe mode and there the tool ran at normal speed. At this point I was getting worried. What the fuck had happened to my computer to make my tool run slow? Had my OS broken in some bizarre way so as to slow down a piece of code that called no system functions? Did I have some kind of infection? I was about ready to reinstall my whole computer, which would have been a serious ordeal.

Before reinstalling my computer I decided to do a few more experiments to narrow down the exact cause of the slowness. I was thinking that some sequence of instructions was fucking up the CPU, or maybe there was some problem with memory, or maybe some silent exceptions were being thrown somewhere. Anyway, after a little more experimentation a thought entered my brain: page heap.

The fucking page heap. I had recently enabled the page heap while trying to solve a bug in my tool; it turned out the bug was not memory corruption, but I thought not to disable the page heap because it had never caused me issues before. I was so wrong to leave it enabled.

What does the page heap do?
The page heap alters the behavior of the memory allocator such that every allocation is given its own page. This is horrific in terms of memory usage, but it makes dealing with heap corruption much easier, as writing beyond the allocation can now be detected by a page fault. It's not immediately obvious, but this also has a horrific effect on performance in some situations.

The cache.
The cache is arranged in such a way that only a certain number of items can share the same lower address bits; this type of cache is called a set-associative cache. My CPU, and pretty much every modern CPU, has a cache which operates in this fashion, and the limitation is not too bad in the normal case. However, with the page heap enabled, pretty much all the small allocations end up with the same lower address bits, completely filling up all usable slots in the cache, resulting in thrashing and the massive 10x slowdown.

So, in summary: always be aware of how the page heap can fuck you up. Disable it as soon as it is no longer needed; it can come back to bite you later, when you have forgotten that you enabled it. But since you have read this warning, you will now know where to look if your code gets slow for no reason, and you will not have to face the terror that I did this day.

The gcc optimizer is retarded

I usually compile my code with -Os; this results in nice, compact code, but GCC sometimes tries too hard to make the code as small as possible, making it very slow. I was forced to switch to -O2 because of this. However, I really do not like how -O2 behaves with regard to inlining: it inlines really quite large functions. I was mostly putting up with this, but then it came to virtual functions, and I was triggered.

Virtual functions and inlining:
You know what insane thing GCC did? It first compared the value in the vtable, and then, if it matched the class's own implementation, it branched to the inlined code; otherwise it called the function through the vtable. Seriously, WTF: inlining is meant to speed things up, but now you have introduced a conditional branch and all that extra code bloat. Function calls are not that expensive, especially compared to all this bullshit to avoid them. Just call the damn virtual function and stop doing retarded things.

I am really starting to get sick of writing code for GCC; it's a really shit compiler that produces very ugly assembly. Luckily I maintain my own fork of GCC, so this is another thing I will look into fixing. I shall create a new optimization level partway between -Os and -O2: it will avoid optimizations which increase code size, but will not actively try to reduce size either.

Addendum: the -fno-devirtualize-speculatively option disables this particular piece of bullshit. At least that's one less hack I need to make to the compiler.

gnu ld is a shit linker

So today I was trying to build a module where a few functions are placed at fixed addresses; well, it turns out there is absolutely no way to do such a thing with ld. Sure, you can place a section at a fixed address, but that is not what I want: I want to place a particular function at a fixed address while keeping it in the same section as the rest of the code, and I need the other sections to fit in around these fixed allocations. That is not possible; ld does not perform any kind of free-space allocation. There is no way to even make it do what I want, as far as I can tell; the way it's designed is just completely incompatible, even with a hacky solution. So of course now I have to write my own linker.
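For the record, here is the closest thing ld does offer: pinning a dedicated section via a linker script (section name and address here are hypothetical). It works, but the function ends up in its own section rather than inside .text with everything else, and nothing flows around the fixed placement:

```
/* C side: __attribute__((section(".fixed"))) void entry_stub(void); */

SECTIONS {
    .fixed 0x00400000 : { *(.fixed) }   /* pin the dedicated section */
    .text : { *(.text) }                /* the rest is laid out normally */
}
```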
