GNU ld is a shit linker

So today I was trying to build a module where a few functions were placed at fixed addresses, and it turns out there is absolutely no way to do such a thing with ld. Sure, you can place a section at a fixed address, but that is not what I want. I want to place a particular function at a fixed address while keeping it in the same section as the rest of the code, and I need the other sections to be fitted in around these fixed allocations. That is simply not possible: ld does not perform any kind of free-space allocation. As far as I can tell there is no way to even bend it into doing what I want; the way it is designed is completely incompatible even with a hack solution. So of course now I have to write my own linker.
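For the record, the closest ld gets is pinning a whole output section. A sketch, assuming the function has been isolated into its own input section with -ffunction-sections (the function and address here are made up):

```
SECTIONS
{
  /* pin one function's per-function section at a fixed address... */
  .text.my_fixed_func 0x08001000 : { *(.text.my_fixed_func) }

  /* ...but ld will not flow the remaining code around that hole;
     it just lays output sections out in order and errors on overlap */
  .text : { *(.text .text.*) }
}
```

This is exactly the limitation described above: the fixed function is no longer in the same section as the rest of the code, and nothing gets allocated into the gaps around it.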

 

Posted in Uncategorized | Leave a comment

A simple solution to integer overflow in c/c++.

In c/c++ there is a major issue which is responsible for a significant proportion of all software security and stability problems: the bullshit integer model. For each operator in an expression, the operands are first promoted to int, and then a common type is chosen by taking the larger of the two operand types; if the types differ only in signedness, unsigned is selected. The result of the operator has this type as well.

If, for example, one were to add two ints together, they would remain of type int and the result would be of type int as well. When one adds two ints and forces the result back into another int, overflow is possible. Now, one might think this is reasonable, and it is if you perform a simple expression and assign the result back into an int, but in more complex situations it becomes a problem.

Examples:
int average(int a, int b) { return (a + b) / 2; }
s64 add64(s32 a, s32 b) { return a + b; }
Both of these will return wrong results in some cases due to intermediate overflow. I consider this to be total horse shit. The c/c++ integer promotion rules make it even worse by being inconsistent: s32 add32(s16 a, s16 b) { return a + b; } works, because the s16 operands are promoted to int and the addition happens at 32-bit width, while in add64 the s32 operands are not widened, so the addition overflows at 32 bits before the result is ever converted to s64. Total bullshit.

Bounds checking:
The most significant area where the integer model really sucks is bounds checking. When parsing data structures loaded into memory it is nearly impossible to correctly check for out-of-bounds lengths or offsets. It is so difficult that in many cases one will not even bother; nobody wants to spend thrice as long writing bounds checks as writing useful code. And even when you manage to write functional bounds checks, they look so horrifically ugly and unmaintainable that it makes you wish you had not bothered.

Now of course one cannot go changing the integer model of the language; that would probably break too many things (I would still consider doing it, because the existing model sucks so bad). What we need is a way to tell the compiler to stop being retarded and do the right thing: perform the intermediate computation at sufficient width to get the mathematically correct result. This is really not that difficult for most expressions; double width would prevent overflow in most cases, and for bounds checking, just checking the carry flag would suffice.

The solution:
I propose a new piece of syntax: a special pseudo-function we shall call ‘X’ for now. Any expression placed as the argument to ‘X’ will be computed at a precision sufficient to return the mathematically correct result. With this new language feature, performing bounds checks or other computations which might overflow becomes trivial.

The previous examples would simply be rewritten as:
int average(int a, int b) { return X((a + b) / 2); }
s64 add64(s32 a, s32 b) { return X(a + b); }

Is it really so much to ask to get a feature like this? It would be more useful than anything else they have come up with in the last 10 years. But of course they would never consider actually adding something useful; they just endlessly masturbate over some weird useless ‘type safety’ bullshit that has no real-world use.

Performing range checks is so profoundly difficult due to intermediate overflow that most programmers get it wrong, or do not even bother.

Posted in Uncategorized | Leave a comment

Why I no longer use fossil source control.

For many years I have used fossil as my source control of choice. It was my first source control, and it was a revolution for me; so many projects I had completely fucked up by not using source control at all. Fossil is great. It solved all of the sticking points which made me resist other source control systems such as git or whatever else was popular at the time. I really like its single-file repository approach; every other source control has this massive folder of bullshit. I also love that the checkout is separate from the repository. Seriously, welding them together is total bullshit: what if you want to work on two revisions at once? Well, you can’t.

Now, fossil has one major issue which is not a problem at first, but over time it becomes a bigger and bigger problem: there is absolutely no way to alter history, at all. I am not a fan of rebasing or any other kind of history rewriting, but fossil takes it to an extreme. Fucked up your last commit? Realized it one second after you hit enter? Well, you are fucked; that commit is there forever. Sure, you could grovel through the undocumented database file and manually undo the commit, but good luck with that.

Now let’s talk about git. I really quite dislike git; it’s kind of shit. But git works, and it has lots of momentum behind it. You also have places like github and gitlab where you can upload and share your repositories; fossil basically has nothing, it’s too niche. Git has a terrible user interface and no decent gui to speak of. Even the web interface of github is absolute trash, barely usable compared to the perfection that is the UI built into fossil.

But even with all the advantages of fossil, I really just cannot use it anymore; not being able to undo your last commit is too much to deal with. I am a scatterbrain. I use source control to help me organize my work and to not completely destroy my project, as has happened before. Not being able to undo a mistake I made just seconds ago is bullshit.

Another thing about git which I have come to realize is actually a good feature is the stage. Fossil has no such system; commit is a single step in which you specify what files you want to commit, and it is done. I have often forgotten to commit files, and other times accidentally committed everything, even stuff that I did not want to. The stage is really helpful in making sure you commit exactly what you intended to commit, and it also lets you build the commit up in stages rather than in a single command which you can easily fuck up.
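For the record, the git workflow that covers both complaints, staging piece by piece and undoing a seconds-old commit, looks roughly like this (the file name and messages are made up):

```shell
# Stage exactly the hunks you mean to commit, review, then commit:
git add -p parser.c          # interactively pick hunks to stage
git status                   # verify what is actually staged
git commit -m "fix overflow in parser"

# Fat-fingered it and noticed one second after hitting enter?
git commit --amend           # redo the last commit in place
git reset --soft HEAD~1      # or remove it entirely, keeping changes staged
```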

So in summary, I am sad that I have to stop using fossil; it is almost perfect, but its issues are too large to deal with. I shall now enter into the suffering and bondage of that horrible master that is git. Fuck git.

Posted in Uncategorized | Leave a comment

Intel is fucking shit, absolute worthless garbage, fuck them and their worthless cpus.

So I was working on some existing code, written in assembly and self-modifying. I was trying to reduce the amount of self-modifying code by replacing some self-modified immediates with register values, where registers were available to hold said values. To my surprise, after changing a shift instruction from an immediate count to a count in CL, the code got significantly slower.

It took me quite a while to actually discover this. I had changed the function somewhat to free up the CL register for the shift count, and at first I thought it was the combination of all the changes adding up to the observed performance drop, but after more experimentation it became clear it was all due to the shift instruction.

WHAT THE FUCK, THE SHIFT INSTRUCTION IS FUCKING SLOW. After checking Agner Fog’s instruction tables I confirmed it: on sandy bridge and later, the shift instruction with the count in CL takes 2 CLOCK CYCLES. What the fuck were they thinking, crippling such a common instruction? This instruction is 1/4 the speed it was on Nehalem; it is slower than on every single x86 cpu going back to the Pentium (except P4, but we don’t talk about that). It’s slower than fucking Bulldozer.

Intel are fucking retards; they are incapable of making cpus with consistent and predictable performance. Why would they do this? All code written for previous, sane cpus is now crippled on this fucking garbage. And this is just one of the many layers of bullshit that is the Intel cpu. I won’t even start on the disaster that is AVX and the SSE mode-switch issue, seriously WTF. I read that on their newer cpus it is even worse: apparently the AVX penalty is now paid for every single SSE instruction executed. If the upper AVX-512 registers are dirty, the cpu permanently fucking downclocks, because each SSE instruction causes an implicit merge back into the full register.

This bullshit has gone too far. I demand Intel be nationalized and shut down, all executives and upper management flayed alive, and any engineers responsible for this retarded bullshit burned at the stake. At the very least, all their bullshit x86 patents should be invalidated so that some competent companies can step up and give us cpus that aren’t fucking shite. AMD is looking pretty good at the moment, but it would be nice to have some competition.

Instruction sets should not be patentable; nothing in the x86 instruction set is new or novel. In fact it is quite the opposite: most of the instructions Intel have come up with recently have been fucking awful garbage. But since so much software uses this crap, you have to live with it. I think x86 went horribly wrong with most of SSE, and it has gotten so much worse since then.

On the subject of bad instruction set design, I think Intel’s first mistake was back in 1997 with the MMX instruction set. This had the potential to be incredible: we could have had general-purpose 64bit integer registers. I have almost never needed vector processing, but native support for 64bit integers would have been a massive improvement, especially if multiply and divide instructions were provided. Aside from the lack of 64bit integer instructions, MMX made another massive mistake, a mistake which in my opinion is the reason MMX failed to gain any real traction: MMX trashes the fpu state and requires a complete reset after use.

Imagine what could have been if MMX could coexist with x87 and did not require that expensive EMMS reset instruction. If MMX had included some basic 64bit integer instructions such as add, sub, shifts, mul and div, the compiler would have had casual access to 64bit registers, and that would have significantly sped up much code; as it is, using 64bit ints on x86 is horrible, especially if one requires multiply and divide. It would also have been possible to mix fpu and mmx code freely. The registers could even still have been shared between mmx and x87; just don’t mark the x87 registers as in-use when mmx uses them.

Posted in Uncategorized | Leave a comment

The exe-modifier.

For many years I have worked on the insanity which is my exe-modifier. This insane piece of software is basically a linker which can insert objects into an existing executable. This allows one to make extensive changes to an existing binary and to implement whole new functionality and features. The new code can be written in c/c++ and closely integrated with the existing code.

There is little documentation on how to use this software and only a handful of example projects; the source code includes two examples, neither of which uses even a fraction of all the features this software has haphazardly grown over the years. If anyone actually has any interest in using this abomination, I am willing to assist them in getting started with the garbage. I will write some docs in the rare case that anyone takes interest.

Example: the music calculator.
This is a fun example which shows off the power of the exe-modifier.
I have taken the standard windows calculator and modified it to render a rotating sine-bow behind the buttons and to play music from an embedded module file. The code for this example can be found in the exe-modifier source code as “example2”. I have always been exceedingly pleased with this one; as well as being a great demonstration of the exe-modifier, it is also the only demo-like thing I have ever made.

music-calculator

music_calc.exe
exe modifier release on github

The music featured in the example:
happiness_island.mod, by Bernard Sumner, 1993

 

Posted in Uncategorized | Leave a comment

Fuck the c++ standards committee, they are a bunch of fucking retards. C++ needs to be forked.

So today I was doing some coding and was greeted with this message: “reference to ‘byte’ is ambiguous”. Of course I was very confused, but after a closer examination of the error message I noticed this: “candidates are: ‘enum class std::byte’”.

What the Jesus living fuck. You retards decided that you would take the very common name ‘byte’ and use it for this enum thing. I know it is part of the std:: namespace, which is owned by the STL, but seriously: it is not uncommon for a using namespace std; to be found somewhere in a code base alongside a typedef unsigned char byte;.

Apparently std::byte is related to some type-safe bullshit that no-one ever asked for. The c++ standards committee keeps adding worthless features which do nothing to address the issues real programmers have, and hardly ever adds anything that is actually useful.

Over the years c++ has irritated me more and more. Only a handful of changes to the standard in c++11 and newer have actually been useful; it is mostly just worthless bullshit, whilst the big issues are never addressed. I propose we fork the c++ standard and create a new standards committee formed of people who actually want to get work done and not just masturbate over some type-safe correctness bullshit.

 

Posted in Uncategorized | Leave a comment

GCC-9, those retards finally fixed something for once!!!

So I was just checking out the new gcc-9 release, and I discovered something to my surprise. They have done something exceedingly rare: they have actually fixed one of their many insane, long-standing code-gen bugs which have caused me major aggravation for years.

This particular bug was introduced in gcc 4.8 (according to compiler explorer). It relates to the use of post-increment when dereferencing a pointer: the compiler, in its infinite insanity, would perform the increment before reading the value, compensating with a -1 displacement on the load.

char ch = *p++;               // as part of a loop
incl	%ecx                  // generated assembly
movb	-1(%ecx), %al         // seriously WTF

GCC-9 appears to have finally fixed this; the increment now happens after the memory read, as it should. How the hell did this bug last so long without any complaints, as far as I could find? Am I the only one who ever actually looks at the generated assembly?

Posted in Uncategorized | Leave a comment