I recently read an, ummm, opinionated article about Jai. I'm not going to talk much about Jai here; you can't make me talk about Jonathan Blow on Alys Plus. (And less snarkily, I'm not going to write about a language that I can't use or read the documentation of.) But one header stood out to me: "The lost era of good software." This seems like the perfect example of what I was talking about in my previous post, "A Lot of Statements about Software Speed and Quality are Just Vibes." This is one of the stronger claims in that section:
The net effect of this is that the software you’re running on your computer is effectively wiping out the last 10-20 years of hardware evolution; in some extreme cases, more like 30 years. The impact of this on things like server costs, environmental footprint, user experience, and overall outcomes for everybody is staggering.
On one level, this isn't that egregious a statement, either in itself or in relation to the header about an era of software quality. I mean, you need 30 years of hardware evolution to be able to wipe it away. I guess you could write software in the early 80s that wipes out 20 years of hardware evolution, but it would be unusably slow. Maybe the 80s were the lost era of good software? Certainly there was good software produced during that time. Much has been made of George R. R. Martin's use of WordStar. At the time, plenty of people liked it, so I don't think we can dismiss his fondness for it as entirely due to Martin being a curmudgeon or a weirdo. Plenty of people liked WordPerfect as well. I myself use Neovim, which owes a lot to Vim, which came out in 1991. Vim in turn owes a lot to Vi, which came out in 1976. I certainly like Vim, so I definitely believe good software was made during this time.
On the other hand, perhaps I'm overstating the extent to which people couldn't trade performance for other nice features. People still programmed in BASIC, which was much slower, yet people used it for learning and even for useful programs. People also wrote shell scripts, which I suspect were genuinely significantly slower at the time due to the cost of initiating a new process and other overheads that aren't as noticeable now.
And of course, like I said in that prior post, describing the 80s or 90s as an era of "quality" is a bit dubious. Crashes and blue screens were pretty common. I don't have hard numbers on their frequency, but it's pretty clear they're less common now, even if they should be even less frequent. Certainly there were things to like about the era: bad design trends (or bad application of design trends) hadn't undermined the UI by reducing contrast and eliminating useful affordances, software subscriptions were unheard of, and there was a lot less telemetry.
Perhaps you could make a case for the 00s. In fact, after writing all that about the 80s and 90s, I realize maybe he really does mean the 00s, as that would sort of line up with his complaints about dynamic languages, which were taking off in a big way then. Reiterating that earlier post, the 00s had the benefit of technologies that made operating systems much more stable and a reasonably healthy web, while mostly predating a lot of trends people rightly criticize. However, there were certainly people complaining about programs being too large and complicated then. Java essentially played the role of Electron, and similarly to critics of Electron today, I think critics of Java had a mix of good and bad points. I guess that could be named an era of software quality, but it seems like a relative term at best.
I also can't escape the sense that the "era" wasn't some sort of stable point that was disrupted or a uniformly golden kingdom ruined by programmers losing the old ways but was just kind of a fortuitous coinciding of different trends. And although I'm trying my best to broaden the view and get beyond whatever nostalgia or other biases are at play, this all feels pretty imprecise.
But what I'd really object to is that this isn't really true of current software on my computer writ large. Glancing at the programs I happen to have open right now in my sidebar, I think the only one that feels clearly like it's "wiping out the last 10-20 years of hardware evolution" is Teams, and honestly part of that might be less its actual performance than my other frustrations with it. Firefox is also a contender, although that gets into the question of how much to blame the browser rather than the webpages in the browser. I guess there's something to claiming the web wipes out years of hardware evolution, regardless of how you assign blame—certainly there are trivial websites that bog down computers and websites that would benefit from a lot less JavaScript. I should also note that while my current slate of apps is using more memory than I'd prefer (roughly 10 to 11 gigabytes as I've sporadically checked free -h while writing), they're pretty light on the CPU, my computer's not actually swapping, and I don't think the programs are doing anything crazy with disk I/O either. I might use more desktop apps and fewer Electron apps than the average person in 2025, but none of these example applications I use have stopped getting updates. I'm not running an all-suckless userspace or a bunch of abandonware or anything.
Curiously, the author isn't too negative about dynamic languages per se, especially Python, but does seem to be pretty negative about dynamic language programmers. If I had to guess, he seems to treat the languages as good tools for their niches, which is a healthy attitude. His statements about the programmers feel pretty disparaging, which is not a healthy attitude. E.g.,
Suddenly programmers were being encouraged to pretend they knew nothing about the hardware their program was destined to run on. Just assume that resources are abundant, that their allocation and deallocation was practically free, and if things started getting sluggish they could just ask management for more servers.
I've definitely seen people suggest a simplistic version of "hardware is cheaper than programmers, don't worry about it!" And similarly, people sometimes present Donald Knuth's adage about premature optimization as though optimization should always happen at the very end or is worthless 99 percent of the time. But honestly, for some problems, this not-great advice is...actually okay. If you're banging out CRUD routes that hit the database a handful of times, especially if they're not foundational things the user must do (like log in) or the user does frequently, your optimization is likely to be premature and not worth the programmer cost.
My thresholds might not be quite right—I don't have the magic formula for how much you should care about performance. Certainly I've messed this up and written code that could have been faster or less resource-intensive. My point is that we shouldn't treat exhortations to care a little bit less about performance or tools that let you think a little less about hardware as tantamount to advocating for total ignorance.
I mean, people didn't say "Python is fast enough" because they're dumbasses or want to advocate ignorance. They said it because Python is fast enough for a lot of things. I'm sure plenty of people did give versions of this advice without sufficient context or nuance, as people caught up in hype cycles tend to do. Certainly that's a valid criticism of someone like Bob Martin, whose writing, as far as I have read, is a mixture of straight-up bad advice and good advice ruined by being canonized into inflexible rules.
It's fine to vent, of course, and there's plenty to vent about in computing today. But I think if your language advocacy is so rooted in a negative view, especially if the negativity seems overbroad or exaggerated, the community you will build around the language risks not being much fun to be in.
I don't know if I would call myself a "normie," but I do relate to wanting to be seen as just a woman, and I also have a relatively conforming gender presentation1. I'm also white and grew up middle class. So I feel like I could at least theoretically do well under a bargain where binary trans people are allowed to transition, as long as we're fairly quiet about it, and protected against discrimination as long as we keep our end of the bargain. Sort of like Don't Ask, Don't Tell for transness, but marginally friendlier and applied across society. Nonetheless, I think this strategy sucks.
There's a strong moral argument against this, of course, and this argument is the most compelling. I happen to be trans in a way that I could theoretically assimilate or could even go entirely stealth, but what about people whose gender identity doesn't fit neatly into those categories? Why is my identity more valid than theirs? Why be satisfied with half measures?
One response to this is more reactionary: Being a "binary transsexual" is a real, scientifically valid phenomenon and being nonbinary isn't. This is bullshit, and I'm not sure I can convince these people otherwise, although frankly the practical argument should still convince them to shut up and support nonbinary people, if only for selfish reasons.
Another response is that it's easier to make strategic gains by starting with trans rights that are relatively acceptable. Sometimes this reaction is thinly veiled reactionary thinking, but some people genuinely believe this in good faith. Some people mix these together—I'd put Brianna Wu in that camp.
This argument doesn't hold up, however.
First, there's not a constituency for this view. If you look at a snapshot of public polling at any one time, you'll definitely find some people who support trans people on some issues but not others. But short of a handful of pickme trans people, trans people who've accepted their own transness but not others', trans people who've mistakenly decided this is the only way forward, and a handful of cis people who really care about trans women in sports for whatever reason, I don't think this group really cares that much. Obviously, sometimes your best option is a compromise that no one really likes, that's the definition of compromise, blah blah blah, but the compromise itself is not going to rally people to your side.
Take healthcare as an example. While people got excited about Barack Obama, the candidate, they didn't really get excited about the Affordable Care Act. In contrast, people rallied around both Bernie Sanders and Medicare for All2. This isn't to downplay what Obamacare achieved. And if Medicare for All is ever passed, it will be less ambitious than the most exuberant slogans and wildest dreams3. But starting by watering it down is going to kill people's motivation. An expansive view of trans rights is going to get trans people excited in a way that "here's a compromise because our backs are against the wall" is not.
Second, even if accepting a compromise is what you eventually need to do, as a matter of negotiation strategy, your campaign shouldn't be for the compromise. If you want to build a bunch of high-speed rail, your spokespeople shouldn't be adding "and we'd accept regular rail too" to every quote they give to the press. If you don't want Medicare to be cut, your slogan shouldn't be "No cuts to Medicare! Although small cuts are not that bad; we'd agree to those if backed into a corner". That doesn't always mean you should push for the maximalist version of your vision 100 percent of the time, but you also don't want to unnecessarily cede ground. Unfortunately, minority groups' completely reasonable requests often get portrayed as unreasonable. Fortunately, activists have some leeway to be unreasonable. They should use it!
Maybe somewhere in the vast multiverse of possible political realities there exists a society where "binary" trans people could maintain a partial but stable swath of rights by excluding nonbinary people, although I doubt it. But currently, we're in a country where cis gay people and cis straight women are also seeing their rights threatened (not to mention immigrants, disabled people, and people of color). It's not like there's a bunch of conservatives eager to protect a subset of trans rights if only the Democrats become 50 percent more transphobic. Sure, Trump voters who don't like trans women in women's sports but also oppose anti-trans discrimination exist, but they probably voted for Trump because they thought he would lower inflation.
Third, trans people benefit from a healthy, vibrant, and diverse queer community. Selecting some people as normal enough to be the face of the movement creates artificial divisions and makes it harder for these spaces to thrive. We all miss out on their community, their jokes, their wisdom, their art, their performance, their media, their selves.
Fourth, conforming in a conformist society is bad for you even if you would do basically the same thing anyway. Even if you never run afoul of societal conventions, the worry about it is a kind of tax on you at all times. And there's the obvious risk that the bounds will tighten further and you'll no longer be safe.
Fifth, especially for younger trans people or those who have only "figured it out" recently, you may not actually know whether your identity neatly conforms to the current bounds. Maybe instead of being 100 percent "binary" transgender and always using one set of pronouns, you want to experiment with other presentations or other sets of pronouns. Or maybe your next partner is nonbinary. Maybe your child is. Maybe your childhood best friend comes out as nonbinary.
Conformity feels a bit like a flood. Even if you live in a big house with a generous garden on a hill above the current high water mark, it will still circumscribe your life, hurt your neighbors, and bring your community to a halt. And there's always the worry the floodwaters will keep rising. Better to keep the flood out of the city.
Except for the part where I was assigned male at birth!↩
To be clear, I'm not saying more people like Medicare for All than the Affordable Care Act. I'm saying more people really like Medicare for All. A few Kaiser Family Foundation polls I dug up—see this article about Medicare for All and this interactive about the ACA—suggest they've had similar popularity between 2017 and 2020. I also found a poll suggesting the ACA is much more popular than Medicare for All, although I think that's at least in part because the question wording made Medicare for All seem more disruptive than it actually would be.↩
Sometimes I read "And now it's all this," a blog where the author, Dr. Drang, posts about his random scripting and automation workflows, as well as various math and physics puzzles. Earlier this month, he revisited a math problem: if eight people are sitting around a table, and they all flip coins, what are the odds no two adjacent people have flipped heads?
My solution is actually pretty similar to his, although I didn't realize there were a couple of tricks that make it much shorter:
Since the first and eighth person are actually next to each other, you can just tack the first person's flip onto the end of the list. (Instead, I wrote a bunch of code to loop around.)
Since we're looking for adjacent heads, you can make the list of heads and tails a string and use in to search for "HH".
For eight people, there are 2^8 possibilities, so checking every possibility is totally doable by a computer. (Honestly, a human could do it in an hour or two, I bet.)
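Here's a minimal sketch of that exhaustive check using both tricks. This is my own reconstruction for illustration, not Dr. Drang's actual code:

```python
# Exhaustive check: enumerate every possible ring of flips and count the ones
# with no two adjacent heads.
from itertools import product

def prob_no_adjacent_heads(n=8):
    good = 0
    for flips in product("HT", repeat=n):
        # Tack the first flip onto the end so the wrap-around pair gets checked too.
        ring = "".join(flips) + flips[0]
        if "HH" not in ring:
            good += 1
    return good / 2 ** n

print(prob_no_adjacent_heads())  # 47/256, about 0.1836
```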
As a personal challenge (and because I needed something to distract myself from being sick and in bed in early February), I came up with another solution using Monte Carlo, i.e., running a bunch of simulations. This is, unsurprisingly, slower and less accurate for eight people. However, it does scale up relatively well. Since the number of possibilities doubles every single time you add a person, the exhaustive approach grows exponentially. Checking all the possibilities for, say, 100 people flipping coins on a single computer, even if extremely optimized, would take longer than the universe has existed1. In contrast, the Monte Carlo approach grows linearly, since you have to do more coin flips as the number of people increases.
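For what it's worth, here's roughly the shape of the Monte Carlo version. This is a sketch of mine with an arbitrary trial count, not the code I actually wrote:

```python
# Monte Carlo estimate: simulate many rings of coin flips and count how often
# no two neighbors both get heads.
import random

def estimate_no_adjacent_heads(n=8, trials=100_000):
    good = 0
    for _ in range(trials):
        flips = [random.random() < 0.5 for _ in range(n)]
        # Check every adjacent pair; (i + 1) % n handles the wrap-around.
        if not any(flips[i] and flips[(i + 1) % n] for i in range(n)):
            good += 1
    return good / trials

print(estimate_no_adjacent_heads())  # hovers around 0.18 for n = 8
```

Each trial costs work proportional to n, so for a fixed number of trials the total work grows linearly with the number of people.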
Lastly, I realized his original post provided an even faster way. Fibonacci numbers have a cousin, Lucas numbers, and one of his readers shared that it turns out these exactly correspond to the number of combinations that don't result in adjacent heads. So the eighth Lucas number is 47, and 256 - 47 = 209, which is exactly equal to the numbers we got before. It's easy enough to calculate Lucas numbers iteratively, and this is much faster than the Monte Carlo approach or the exhaustive approach.
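An iterative Lucas function is only a few lines; this sketch assumes the standard definition L(0) = 2, L(1) = 1:

```python
# Iterative Lucas numbers: L(0) = 2, L(1) = 1, L(n) = L(n-1) + L(n-2).
def lucas(n):
    a, b = 2, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(lucas(8))           # 47
print(lucas(8) / 2 ** 8)  # about 0.1836, the same answer as before
```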
However, you may realize that this suggests an even faster way: using a closed-form solution to the Lucas numbers. Using the golden ratio, we can calculate this easily, although I had to round it to give 47 and not a number a tiny bit less than 47. Calculating closed-form solutions to Lucas and Fibonacci numbers is a bit weird as a result of the fact that we're taking a square root that doesn't result in an integer (and is irrational, I'm pretty sure). I'm sure you could use a computer algebra system (or a library). I imagine there may be some way to rearrange the formula to minimize the error from doing square roots on floating point numbers, but I haven't tried it yet. The decimal library might also help.
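For reference, here's the closed form I'm describing, assuming the usual identity L(n) = phi^n + (1 - phi)^n with phi the golden ratio; note the rounding step to claw back the floating point error:

```python
# Closed-form Lucas numbers via the golden ratio. Floating point means the
# result needs to be rounded back to an integer.
import math

def lucas_closed_form(n):
    phi = (1 + math.sqrt(5)) / 2
    return round(phi ** n + (1 - phi) ** n)

print(lucas_closed_form(8))                # 47
print(lucas_closed_form(8) / 2 ** 8)       # chance of no adjacent heads for 8 people
```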
Because exponentiation isn't a constant-time operation on computers, I don't think the algorithm is strictly constant time. The closed form is still faster in Python, although the gap would probably be smaller if Python had less overhead in various operations. I thought exponentiation would be log2 N but no less a source than Raymond Hettinger says it's "nearly constant time." My quick tests indicate exponents run in about 8 nanoseconds up until they exceed the threshold of what fits in a long (2^64-1), and then it takes hundreds of nanoseconds and starts to increase sublinearly. Other tests I ran resulted in different numbers, so I wouldn't bet on those exact numbers, but it definitely seems to be in the nanosecond range, and thus not that much slower than other operations.
Iteratively finding a Lucas number is linear, but with a much smaller coefficient than the Monte Carlo approach.
The Monte Carlo approach grows linearly. In practice there is a significant (but not prohibitive) coefficient.
Using Lucas numbers by finding them iteratively is linear with a pretty small coefficient.
Using Lucas numbers via a closed form is briefly constant time, then sublinear with a pretty small coefficient.
You can also find Lucas numbers recursively, which has its own opportunities for optimization. Maybe I'll do a follow-up and discuss those options.
This is all kind of irrelevant (I mean, besides the fact that it's a puzzle). Other than the exhaustive approach, these are all plenty fast up to pretty big numbers. And the odds increase very rapidly, so that the odds of two adjacent people somewhere in your increasingly large ring of people both flipping heads become ever closer to 100%.
The 13.7 billion years it's really existed, not the paltry 6,000 years young-Earth creationists claim.↩
I got a very late start on Doscember, but I had some downtime over the holidays so I decided to play around with DOS, with an eye to maybe making video games in it eventually. I decided to program in Pascal as it has some interesting characteristics, was a mainstream language for DOS, has a famously good implementation in Turbo Pascal, and is slightly higher level than C.
I haven't written a ton of Pascal yet, but these are my thoughts so far. I have a pretty positive impression overall!
Some people debate the merits of recent compilers that disallow unused variables or require certain capitalization schemes. Pascal is honestly much more persnickety. All variables have to be defined up front. Similar to variables, all constants need to be defined up front, in a separate section from the variables. This is often encouraged to this day, with some exceptions, and Lisps in particular end up encouraging this structure as well, with their let construct. What makes Pascal's format less convenient is that you can't do many operations in the initialization. You can do math and logical operations, but little else, meaning an expression like var x: integer = my_function(5); is not allowed as far as I've been able to tell.
It especially becomes a drag when you have to define loop variables up front. Not only is defining a variable for a loop at the start or right before a loop very very common, it's common for a good reason—loop variables tend to be used for that loop and nowhere else. I think if I were designing a Pascal implementation, I would allow defining variables in loops. I believe Delphi does this.
It also becomes a drag if you're doing a series of manipulations and want to use variables for each step (e.g., processed_entries, processed_entries_no_duplicates), which I sometimes do when working with data. (Of course, data analysis isn't really a strength of Pascal's for other reasons, including its lack of libraries for basic data processing, statistics, machine learning, and visualization.) I actually just read an article encouraging people to avoid these kinds of names, so perhaps discouraging naming intermediate values is not so out of step from modern practice. And this kind of code ends up being kind of throwaway code anyway, so it's really not a surprise that it doesn't fit well in a language intended to impose structure.
There's also no type inference. Given the level of IDE and editor tooling and the level of computing resources when Pascal was originally invented or even when Borland and others were extending the language, this decision was probably correct, but it definitely makes it less ergonomic than many languages in wide use today.
So while I find both the required variable section and lack of inference annoying, it's not a dealbreaker for what I want to use it for. Now, if I were writing large programs in it every day, it might start to grate on me.
Pascal is an interesting mix of high and low level. It has pointers and the data types are pretty clearly matched to exact memory sizes; however, it also has features like sets, tagged unions (which it calls "records with variant parts"), and subrange types (e.g., you can define a MonthNumber type that's limited from 1 to 12). Turbo Pascal eventually added support for object-oriented programming. Free Pascal has also added for each loops ("for in" loops) and generics. So in some ways it feels similar to the "better C" languages, like Zig or Odin, although admittedly I haven't personally tried those out yet.
As an aside, subrange types are kind of interesting. Pascal has them, Ada has them, but very few other languages have them? In fact, Oberon, the language Niklaus Wirth created after Pascal and Modula-2, doesn't have them. In many languages without them, you can do something similar by controlling object instantiation or by controlling struct creation in Rust. For example, although Python stores months as integers, datetime.datetime prevents you from assigning 13 as the month. I believe you can turn C structs into opaque pointers and fake something similar as well, although the protections aren't as strong.
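To make that concrete, here's a tiny illustration (the specific dates are arbitrary):

```python
# Python's datetime refuses an out-of-range month at construction time, which
# gets you some of the benefit of a subrange type by controlling instantiation.
from datetime import datetime

datetime(2025, 12, 1)  # fine
try:
    datetime(2025, 13, 1)
except ValueError as e:
    print(e)  # "month must be in 1..12"
```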
Despite both Pascal and Ada having a notion of "subranges," they work pretty differently. In Ada, my understanding is that types favor the semantics and the idea is that the compiler will choose the appropriate physical size. Pascal allows for, say, giving an existing type a new name to prevent conflation, but its types are pretty tied to hardware types. Even the documentation on sets emphasizes their memory usage. That's probably a pretty good tradeoff for the 1970s, where you had to be mindful about memory usage for any substantial program, yet also wanted some affordances to express semantics. Originally I thought this combination had lost its luster, but upon more reflection1, I realized Rust and other modern systems languages will sometimes straddle the line, so it may be even more popular today. You can also end up with a mixture of physical and logical type features with C# and probably will be able to in Java once they finish adding unboxed primitives. I wouldn't call these strictly systems languages, but they are kind of systems-adjacent.
Pascal shows its age in other ways as well. Its identifiers are case insensitive. This isn't a significant problem since you can mix case in your source files and the errors will print identifiers in their original casing. It does prevent you from using the Date date = new Date() convention you sometimes see in object-oriented programming, though.
So far I have found the documentation tough to navigate, similar to how I find Javadocs difficult to navigate. They do a good job of organizing all the information consistently and documenting all the types, methods, etc., but discoverability tends to be poor—it's hard to know where to get started out of this large mess of common tools. It's also tough to find things that work together. This varies based on the unit (what Pascal calls its modules). For units that are small or standard, like the Math unit, this isn't a big deal. Your intuitions that the rounding functions are probably named round, ceil, floor or something like that will probably get you to the right place. Although if you only look for "round", you'll find RoundTo, but might miss the others and still be at a loss for how to round a floating point number to an integer. This might actually be the hardest part for me going forward.
Also like Java, there are quite a few things kept around for backwards compatibility that don't seem to be a good idea to try to use today. However, with Java, I feel like there's still some expectation that the deprecated classes still work, which seemingly is less true for Pascal, at least if you use Free Pascal. In fairness, while there's not a standard "deprecated" tag or "compatibility warning" graphic, there does seem to be some effort to clarify the current status and steer users toward what's actually maintained. For example, the `Graph` unit says:
The unit is provided for compatibility only: It is recommended to use more modern graphical systems. The graph unit will allow to recompile old programs. They will work to some extent, but if the application has heavy graphical needs, it's recommended to use another set of graphical routines, suited to the platform the program should work on.
My hope is that if I follow their advice, I won't get burned on compatibility issues. Although, given part of the reason I'm learning Pascal is for retrocomputer adventures, I probably will experiment with some of the older stuff and push the boundaries. At least I'll know what I'm signing up for.
One strength of Pascal's you'll sometimes hear about is Delphi, which is kind of like an alternate-timeline form of Visual Basic (pre-.NET). It also has an easy form builder. You can still buy these today from Embarcadero, or you can use the open source equivalent, Lazarus.
Another strength is that it compiles fast. This is less impressive today than when Borland was bragging about Turbo Pascal's immense speed, but of course, those hardware advancements have not made all compilation instantaneous. I'm pretty used to writing in languages that don't really require noticeable compilation, so the quick turnaround makes me feel at home and makes me miss the lack of a REPL less.
Overall, I don't see Pascal dethroning any of the languages I use most or like the most—Python, Clojure, C#, and Fennel—in large part because of the missing ergonomic features and unfriendly documentation, but I imagine I will use it for future retrocomputing adventures. Like my other recent foray into a programming language of yesteryear, Prolog, I'm certainly glad I learned it.
I guess this is a pun, although I tend to think of reflection as being squarely on the higher-level side. Although I suppose you could theoretically use a system to analyze your memory usage at runtime and possibly optimize it, almost like building in a JIT compiler into your program.↩
If you dive into the YouTube player, there's an option called "Stats for Nerds" that gives you way more information than you need, plus a few things that are probably useful for troubleshooting, like buffer health and the number of dropped frames1. I thought it would be funny to create this, but for blog posts.
Right now, it includes the number of words, the number of sentences, the average words per sentence, the Flesch-Kincaid reading ease score, and the Dale-Chall reading ease score. I'll probably add more, eventually.
I've dived into creating plugins for Lektor with this blog, and they're less complex than I feared. I don't remember why exactly I shied away. They're written in Python so that part wouldn't have turned me off, but I think I may have overestimated the amount of overhead required to do it. Anyway, having made two plugins, I can say they're pretty well-documented and you can always read through other plugins for help. Calculating these stats was really easy because I was able to pull in NLTK, BeautifulSoup (for removing the HTML from the blog post), and py-readability-metrics.
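For the curious, the calculation looks roughly like the sketch below, minus all the Lektor plugin plumbing; the function name and dictionary keys are placeholders of mine, not the plugin's actual API:

```python
# Rough sketch of the per-post stats calculation.
from bs4 import BeautifulSoup
from nltk.tokenize import sent_tokenize, word_tokenize  # needs nltk.download("punkt")
from readability import Readability  # py-readability-metrics

def post_stats(html: str) -> dict:
    text = BeautifulSoup(html, "html.parser").get_text()    # strip the markup
    words = [w for w in word_tokenize(text) if w.isalpha()]  # rough word count
    sentences = sent_tokenize(text)
    r = Readability(text)  # wants at least 100 words of text
    return {
        "words": len(words),
        "sentences": len(sentences),
        "words_per_sentence": round(len(words) / len(sentences), 1),
        "flesch_kincaid": round(r.flesch_kincaid().score, 1),
        "dale_chall": round(r.dale_chall().score, 1),
    }
```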
For future reference, here's what they look like right now:
They're not on every post yet, but I'm sure I'll either do that soon or get distracted and do something else.
One thing that's fun about this blog that I didn't have on Cohost is the ability to make changes across all my posts like this one. Plus, adding stats generated by counting or analyzing the contents would have been nigh-impossible using CSS, or would have required some sort of separate script to generate them and then paste them into the post body. I'd like to do more of this, although I don't want to make the pages too crowded.
Realistically, these are all probably meant to be used for troubleshooting, but things like the codec versions and the "Mystery Text" are either going to come up in fairly niche contexts (e.g., if you happen to know a version of a codec works poorly on your system) or are only meaningful to YouTube's own software developers and system administrators.↩
It's common to hear that software today is in a dire state. I'm not going to disagree—when I came back to my computer to add to this post, Firefox had stopped accepting keyboard input, with keystrokes appearing on screen sometimes 30 seconds or more later and sometimes not at all. However, I do think it's hard to nail down whether it's gotten worse.
One semi-famous rant in this space is Casey Muratori's complaining about Visual Studio slowing down over time, particularly in its watch window, which apparently no longer lets you rapidly step through. (You can see the actual demo here.) While I don't doubt that it used to be faster, I'm not sure how representative Visual Studio is.
Often the most rigorous measurements, e.g., Nicholas Nethercote's "How to Speed Up the Rust Compiler" posts (here's the latest), show things getting faster. Obviously, this is just an effect of the fact that the people paying closest attention to performance are often optimizing their own performance and also that if your performance optimization efforts fail, you're probably not going to post your measurements. No company or open source maintainer or solo dev or hobbyist is going to publish "My Software Got Really Slow between 2020 and 2024 Because I Was Asleep at the Switch". Still, it is kind of funny how divergent vibes and data are here.
Some of the most rigorous and broad measurements showing things have gotten slower that I'm aware of are in the form of webpage sizes. Obviously size ≠ speed, but the size increase is drastic enough to offset a lot of improvements in other areas. The HTTP Archive's Web Almanac showed a nearly 600 percent increase in size from June 2012 to June 2022 for mobile pages and a 221 percent increase for desktop pages. Their 2024 report shows the average weight of pages continues to increase. This is genuinely a problem!
Another set of pretty rigorous measurements is Dan Luu's measurements of latency, such as "input lag", written in 2017. He's cautiously optimistic:
On the bright side, we’re arguably emerging from the latency dark ages and it’s now possible to assemble a computer or buy a tablet with latency that’s in the same range as you could get off-the-shelf in the 70s and 80s.
So there are some areas where there are genuine problems, but a more mixed picture overall. I think one could fairly ask why performance is dipping at all, given the enormously faster computers we have. Even if the end result is that software is, say, only 5 percent slower overall, why do we have to tolerate even that, when hardware is so much faster? Still, "software is much slower than it could be" is quite a bit different than "software is much slower than before."
Quality is even harder to nail down.
Having done some retrocomputing lately and a lot of reading about computers and software of the 80s and 90s, it is a bit funny to think of software in the 80s or 90s as being better than today. MS-DOS was very prone to crashing and while Windows 95 and 98 made improvements, they weren't known for their stability either. I know little about Apple computers of that era, but my impression is that the later incarnations of the Classic Mac OS operating system were showing their age and also pretty buggy. Offhand, the only stable operating system of the 90s by reputation I can think of is probably Windows 2000. It made it in barely under the wire to count as "90s," being released December 15, 19991, but NT 4 probably qualifies as well.
Realistically, I'm sure there were other solid OSes then. I haven't heard good things about desktop Linux in the 90s2, with the exception of Neal Stephenson's "In the Beginning...Was the Command Line," which, via analogy, dubiously claims Linux computers never crash. While I find that very unlikely, that was when the LAMP stack started to gain popularity and Apache was popular not long after its 1995 release, so I'd guess Linux servers at least were reasonably stable. Apparently Red Hat already had its IPO and reached a market cap of $5 billion in 1999. Presumably commercial Unix OSes and IBM's mainframe offerings were competitive with Windows NT and Linux then, but I don't know much about them.
I don't know a lot about BeOS, OS/2, or NeXTSTEP (the last of which became the basis of a very well regarded operating system), so it's possible I'm short-changing those. I've heard good things about BeOS and the Mac OS of the time, but those were for their user friendliness. I'm kinda glossing over that, but it is important. I mean, a completely bug-free implementation of ed that runs at lightning speed wouldn't be very useful to most users today. I think you could make a strong argument that flat design and being able to A/B test and constantly tweak stuff were really bad for usability, and software usability has genuinely gone downhill.
At risk of short-changing some operating systems, it seems that, in the 90s, your choices for a stable operating system were one that you had to pay Microsoft for (which wasn't even as good for playing games) or a few obscure OSes that you had to pay IBM and a couple of others a lot for and that were mostly meant for servers and mainframes. The other operating systems had their own merits, to be clear, but I'm talking about operating systems that had some stability features that we take for granted nowadays.3
Still, maybe software peaked in the 00s instead of the 90s and has since stagnated generally, or has actually gotten significantly worse. The 00s were an era where everyone had the benefit of protected memory, meaning programs couldn't fuck each other up (or fuck up the OS itself) anymore (well, mostly), and other problems, like malware and buggy drivers, had mostly been reined in. And perhaps that's early enough that the pernicious trends that Ruined Everything hadn't really taken hold.
Well, maybe! Like I said, it's tough to nail all this down. Webpages were certainly smaller then, and the trend of having tons of tracking scripts on pages hadn't really emerged yet. So "webapps were better in the late 00's/early 10's" seems much more grounded and plausible to me than "software in general was better in the late 00's/early 10's."
Mobile apps are kinda interesting in light of this. I'm genuinely pretty content with the quality of most of my phone apps on iOS. I'm not sure they crash less than desktop apps, but they start really quickly and tend to be better at picking up where they left off. Of course, I have the benefit of good 5G coverage where I live, so I'm not as affected by cases where the programmers assumed people would have reliable internet 24/7. And while I don't have the latest Apple phone (and never have had the latest smartphone), I do have a new-ish iPhone (albeit an SE), so I'm definitely not as strapped for processing power as many people.
The other thing to think about is that people have been complaining about this for a long time. Niklaus Wirth complained about this in 1995. I also found people discussing a similar idea in 2009, although note the number of skeptical comments.
While I'm not out to prevent anyone from venting, I do think if we actually want to make things better, we should be more grounded and step away from the nostalgia and hyperbole. There are valuable techniques and tools from today, there are valuable techniques and tools we could resurrect, and there are valuable techniques and tools still to be invented. We should try to draw from all categories if we want to make software better for our users.
although i'm sure people had a lot of fun tinkering.↩
Similarly, you could say that in the early 90s your only choices for multitasking on an IBM compatible were Windows 3.1 and OS/2, or that in the 80s your only choice for a solid GUI was a Mac. Obviously, "supports a GUI" and "can multitask" are completely trivial now, things you expect out of a Raspberry Pi.↩
DeepSeek's release has thrown big tech into a tizzy, as they realize they have even less of a competitive advantage (a "moat" in business-speak) than they thought, producing some excellent sources of schadenfreude in the process, such as "OpenAI Furious DeepSeek Might Have Stolen All the Data OpenAI Stole From Us".
Dario Amodei, the CEO of Anthropic, wrote a kinda weird post on his personal website. He argues that, one, DeepSeek is actually behind where you'd expect a model to be today and not that much cheaper than, just picking one out of his hat, Anthropic's models, but, two, there's an "existential risk" from China having access to GPUs, necessitating export controls.
Now, it's possible that he's surveyed the market and contemplated the politics of this all, and thus genuinely believes an authoritarian country having access to AI or somehow controlling the market for it would be really bad (although I have some bad news for him regarding the current direction of the United States), apparently because of the military advantage it would give them1. Still, part of me believes a different story.
Wouldn't it be convenient for the head of an AI company if a new competitor's product were both overhyped and there were some sort of military threat in that competitor continuing to have access to the necessary chips? Wouldn't that narrative play well in Washington, given an administration obsessed with trade, Washington's pre-existing worries about China, and the perpetual willingness to do things in the name of national security? And wouldn't downplaying the magnitude simultaneously help preserve his own company's image?
He also argues that "Making AI that is smarter than almost all humans at almost all things will require millions of chips, tens of billions of dollars (at least), and is most likely to happen in 2026-2027." To me, this is pretty deranged. I don't know that progress has slowed down entirely, but I'm not sure we've seen any GPT-3 to GPT-4 leaps. I guess we'll find out!
Another funny thing about this is that DeepSeek is more open than Anthropic, which comes up in this post because Amodei doesn't disclose how much it actually cost to train Anthropic's models, so when he talks about the relative cost of Anthropic's to DeepSeek's, he has to kind of hand-wave it. This isn't necessarily hypocritical, although it seems to slightly undermine his point about military advantage. Obviously DeepSeek or other Chinese labs could close up their models at any time—a move pioneered by OpenAI—but currently U.S. researchers (and the U.S. military) have basically the same access as Chinese researchers (and the Chinese military). Arguably, the U.S. has better access because we have more access to the necessary hardware, as far as I can tell. We certainly have the capital.
He does at least say he wants China to benefit in non-military areas:
To be clear, the goal here is not to deny China or any other authoritarian country the immense benefits in science, medicine, quality of life, etc that come from very powerful AI systems. Everyone should be able to benefit from AI. The goal is to prevent them from gaining military dominance.
This is nice, I guess, although it seems like export controls would also undermine this goal of letting those countries access the "immense benefits" he foresees. Maybe his ideal is that Chinese citizens and businesses have minimal access to models directly and only access them through the APIs of, picking a name at random, Anthropic.
This is all a bit weird for me to analyze because, frankly, I'm not sure there are any great AI benchmarks, and I'm dubious of the value of AI generally, so talk of which models are "better" feels a bit woolly to me. The fact that they are built on grand scale plagiarism and are resource-intensive doesn't help.
The IDF has used an AI system called, ghoulishly, the Gospel in order to select bombing targets. However, this appears to be based on machine learning, not LLMs or other generative AI.↩
As a follow up to (Trying) to Add Stat Tracking, I checked again and it seems to be working. So either it was working but hadn't received data yet or restarting something on my server got it to work again. 🤷♀️
Programming in the 21st Century is a blog by "recovering programmer" James Hague covering a variety of topics, including functional programming, functional game programming, and how programming differs in, well, the 21st century. I revisited it lately because I was doing Doscember, and the author frequently talks about that era, so I thought it was good to have some perspective before doing retrocomputing.
One thing I really appreciate is that the author seems to strike a good balance between leveraging the perspective of someone who was around for the 1980s era of computing, when there were tight limits on computer resources, while appreciating what's available now and what it lets you accomplish. I have plenty of frustrations with modern software, including its performance, but I'm nonetheless frustrated by simplistic takes dunking on, say, using a garbage collected language1 or "today's programmers." Whinging is understandable but these specific tropes are counterproductive.
Worse, some of these discussions devolve into fetishizing efficiency or low-level programming. There's nothing wrong with an intense focus on optimization in contexts where that's necessary or even enjoying it as a challenge or learning exercise—what I object to is acting like an obsession with performance or low-level programming is necessary to be a good programmer or necessary for good programs.
I've actually been playing around with Turbo Pascal specifically, which gives me some perspective on posts like Things Turbo Pascal is Smaller Than. Reading the comments on lobste.rs, it seems like people interpret it as trying to criticize the items on the list? I don't interpret it that way, frankly. I see it more as a statement of what could be done in that amount of space and the constraints that made it necessary. In that way, I definitely appreciate what Turbo Pascal, a ~40 KB binary, can do. But the idea that larger programs are some sort of hugely superfluous waste of space is missing the overall point.
In actually using Turbo Pascal, I also notice what it can't do and the tradeoffs it made. Turbo Pascal 3.5 shipped the text of the error messages as a separate file in order to allow programmers to delete it and save 1.5 KB of disk space and memory. It was good that they put the work in to do that, but thank god we don't need to do that anymore. Turbo Pascal 5.5, which I've been using, embeds the messages directly, but they're still pretty sparse. I guess it's not completely out of the question that they could cram in the kinds of detailed messages present in Rust (or that are increasingly being added to Python) and the accompanying logic in a binary roughly the size of Turbo Pascal 5.5's. I suspect the design choices that make Pascal quick to parse probably would give such a project a fighting chance. However, it would be tough.
You also can see the limits of this era even with clever optimizations and plenty of implementation drudgery in posts like A Spell Checker Used to be a Major Feat of Engineering. He points out that /usr/share/dict/words is 2 MB. Some systems didn't even have that much total memory! It's not enough to just drop the uncommon words, either. Thus, rather than spell checking being something you just drop in or get for free from your browser or OS, it was the major feat of engineering from the title, and it took time away from other features.
There's actually a page on the Free Pascal wiki, Size Matters, that suggests to me this fetishization can become truly unsparing. Free Pascal seems to produce binaries that are consistently larger than Turbo Pascal did2, but these are still a reasonable size, in the hundreds of kilobytes or handful of megabytes. To pick an example I'm pretty well-versed in: compiling Clojure ahead of time into a static binary frequently creates larger binaries, in the 10s of MBs. Looking at binaries installed on my computer, this is on the large side, but not extraordinary. If Free Pascal users are complaining about much smaller binaries, like the wiki article suggests, they're likely either working in specialized situations or not being realistic.
As I've been working on this and dropping in the titles, I realized how clear and appealing they are. Programming in the 21st Century may sound like a throwaway title until you realize that when he started to program, the 21st century was legitimately far away. With the pace of technological development we are in a different era. (You could probably make the case for several eras passing between then and now, in fact.)
Overall, there's a focus on the end product. While I enjoy retrocomputing, programming language theory, and a bunch of other technology rabbitholes, I do want to make things that are creative, useful, or even both. I see Programming in the 21st Century as a useful reminder to do just that.
The blog finished up in 2017, with Hague saying he said all he meant to. While I'm sad to see it wrap up, it ended on a high note. I wish Hague success in whatever games or programs or non-tech projects he's doing now.
I'm not talking about specific claims either, like "garbage collection causes too much latency for demanding games" even if I don't fully agree with them. I'm talking about claims that garbage collection is generally a sign of laziness or why programs today are so slow.↩
I'm not just making this comparison because I've been using Turbo Pascal; it actually comes up in the wiki.↩
If you're familiar with what Brianna Wu has been up to since Gamergate, you're probably at least vaguely aware of the way she sucks. If you're familiar with Brianna Wu only up to Gamergate, well, buckle up.
⚠️ Content Note: transphobia
Brianna Wu is a transgender woman who, regrettably, has a platform on Twitter. I've been pretty negative about her and her opinions on trans people, politics, and trans politics and have bluntly urged people to not, under any circumstances, follow her advice on transitioning.
Recently she posted her views in a way that concisely illustrates how feckless the trans politics she's arguing for are. Here's the tweet:
No more conflating non-binary with actual transsexuals.
No more trans women in sports.
No more access to women's spaces for people not medically transitioning.
And a unified voice in protecting dignity and health care for actual transsexuals medically transitioning.
I'm sure some cis people, even cis people who are pretty accepting of trans people, think this sounds reasonable. If I bend over backwards to be charitable, I guess there's a version of half of these I technically agree with. Yeah, the trans people she calls "actual transsexuals"1 have different needs and experiences to some extent than non-binary people. (Although to be clear, non-binary people transition medically, too.) Yeah, it's good to protect dignity and health care for trans people who are medically transitioning.
But overall this is a political strategy of throwing people under the bus in hopes that the bigots will be mollified and not come after you. While this obviously isn't good for the people you're throwing under the bus, it's not good for you either. For one thing, this is corrosive to any trans community larger or more functional than a clique of you and your mean t-girl friends2. For another, the bigots won't be mollified.
Lastly, what does this supposedly get you? Dignity and health care are actually things worth striving for, sure, but what kind of gatekeeping is lurking behind "actual transsexuals"? I'm not saying there's no value in seeing doctors and psychologists for what is a medically and psychologically involved process, but the kind of gatekeeping I'm talking about, which attempts to identify "actual transsexuals," is bad for everyone. Even if you emerge with a scrip for hormones in hand, the process is pretty degrading. Maybe she would forswear those parts of the process and that's what she means by "dignity." But her embrace of terms like autogynephile makes me doubt that.
And it sucks for trans women to not be allowed in women's sports! I participated in cross country as my assigned gender well before I transitioned, but it was nonetheless a good experience. If I had transitioned earlier, I'd hate to give that up or be outed by being the only girl in the boys' race. I don't run competitively anymore (and to be clear, was never an elite athlete), but I think there's something corrosive about just knowing I'm disallowed from the women's division on the basis of being trans. Similarly, I'm not going to be using the restrooms in the U.S. Capitol anytime soon, but there's something a little bit degrading about the fact that I would be expected to use the men's restroom. Wu doesn't support bathroom bills like that, but I think her (apparent) blanket exclusion of trans women from women's sports would have a similar result. And as she herself has noted, most of the people pushing bans on trans women in women's sports are not concerned for female athletes (or even cis female athletes), which is why you'll see bans for trans women in the women's division of chess. Obviously, trans female athletes, congressional staffers, and chess players are most affected by these bans, but I think in a subtler way, it affects us all.
I can't help but think back to when Brianna Wu posted about how she's not a "biological woman." If that's just a weird way of acknowledging there are differences between her and cis women broadly speaking, some of them biological, fine. But I don't think that's what she means. I'm not going to psychoanalyze or speculate about what the thought process behind that might be, but it's hard to not read it as a concession to transphobia. Suffice it to say, I refuse to have my womanhood seen as some sort of special dispensation granted by benevolent cis people.
The reason I'm writing about this is not to point out that she sucks.3 It's because I don't think she's alone. I don't think I'm going to wake up tomorrow to find that all the trans women who already have access to medical transition have joined her or anything, but people are swayed by this false promise. I'm particularly concerned about newly out trans people who haven't seen what this kind of gatekeeping represents being swayed by Wu's veneer of reasonableness. With Christian nationalism's growing power in American politics, we need to stick together more than ever.
I personally don't mind the term transsexual, at least when used by trans people, but the whole "actual transsexual" thing? Ick. No.↩
this joke is a work of fiction and any resemblance to any actual transsexuals, living or dead, is coincidental.↩
okay, it's a little cathartic to point out that she sucks.↩
input validation is old news. instead of rejecting the invalid content, you should accept it anyway, then shame the user constantly until they fix it
Tagged:
#good ideas,
#this isn't about anything
hey friends, gonna start this one off with a request: i’m currently without stable housing and without employment, and i’m in extremely dire financial straits. as of this writing my account is overdrafted and i have very limited options for fixing that. i…
[Image: Depiction of the Butlerian Jihad from the books]
Earlier this year, my friend asked me to come over to her place to watch Dune Part One so she could the next day drag me to see Part Two in cinema. After that weekend, I gained a new set of worlds to explore:…