The question is probably also one of tradeoffs, though: where we are right now may be a maximum of productivity, but not of resilience. A single failure today could cascade into consequences much deadlier than the same failure would have in a world that produces less but distributes it more reasonably (and we know that food does get wasted, so it's not like we have literally zero margin here, though of course waste itself can't be eliminated entirely).
dr_s
And what’s the matter with bits, anyway? Are they less important than atoms?
Arguably, yes, because they are less fundamental. A revolution in our understanding of the fundamental laws of physics begets more secondary revolutions down the line; quantum mechanics alone gave us lasers, nuclear power and those very bits—to name but a few. So from a revolution in our understanding of the world comes the promise of more revolutions once that reaches the application stage. Understanding quantum gravity might lead to warp drives. But no matter how great our ability to manipulate bits, the best they generally can do (except for the possibility of AGI, I guess) is help us squeeze more efficiency out of what we have already. We feel the energy problem especially keenly right now, of course, and far from helping all that much, computers only eat up ever more energy, sometimes in rather pointless ways. A clean, cheap source of energy right now would be worth far more than all the social media in the world.
Some of those issues, though, are political, not technological. Though it can certainly be argued that one cause of stagnation is that political and social institutions have become inadequate at incentivising true innovation.
I think there's a case for two different sources: one external, the simple lack of any further low-hanging fruit to exploit, and one internal, which is exactly this: the increasing inadequacy of our institutions at creating the conditions for innovation, often caused, paradoxically, by an excessive focus on promoting innovation. "If scientists gave us all this cool stuff by working on their own, imagine how good they will be if we hire a lot more of them and pit them into a competition for funding with each other!" was a catastrophically stupid idea, resting on the assumption that humans can be made to produce highly creative work on command, like clockwork. It led to short-termism, creating a humongous, confusing, amorphous mass of tiny innovations, many of which are non-replicable or straight-up bogus, and efficiently killed off any incentive to actually work on long-term, solid, sweeping discoveries.
“Where are my testicles, Summer?”
This. The deterministic prisoner's dilemma reminds me a lot of quantum entanglement and Bell's theorem experiments, except it doesn't even have THAT amount of mystery; it's just plain old correlation. If I pick two boxes, put $1000 into one, and send them both at near lightspeed in opposite directions, you're not doing FTL signalling when you open one, find the money, and instantly deduce that the other is empty. This is the same, but it feels weird because intelligences are involved; however, unless you believe in a supernatural source of free will (in which case CDT is the right choice regardless, and you could reasonably defect), intelligences should be subject to the same exact causal chains as boxes full of money.
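A minimal sketch of the point in toy Python (all names here are mine, invented for illustration): the perfect anticorrelation between the two boxes comes from a single shared past event, not from any signal exchanged at opening time.

```python
import random

# Toy model of the two-box scenario: one past event (which box gets
# the $1000) fixes both later observations; nothing travels between
# the boxes when they are opened.

def prepare_boxes(rng: random.Random) -> tuple[bool, bool]:
    """Returns (left_has_money, right_has_money), decided once, in the past."""
    left_has_money = rng.random() < 0.5
    return left_has_money, not left_has_money

rng = random.Random(42)
for _ in range(1000):
    left, right = prepare_boxes(rng)
    # Opening either box instantly tells you the other's content,
    # purely because both were fixed by the same earlier event.
    assert left != right
```

The same structure applies to the deterministic dilemma: two agents running the same causal process are correlated by their shared past, without any communication at decision time.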
VSCode generally has better code hints, though Sublime has recently improved in that respect. However, VSCode also has a bad habit of getting REALLY slow when working with extremely large files, because it tries to parse them whole, I guess.
Software: Godot Engine
Need: Game engine for amateur devs (especially for 2D and pixel art games)
Other programs I’ve tried: Unity, Game Maker Studio, Pygame, Love 2D
If you're looking for a game engine that's easy to get into and quick to produce results with, Godot is your best choice. It's free, completely open source, ships with a nice variety of built-in functionality and a lot of useful object types, while still giving you the flexibility of writing your own scripts. It's lighter and more intuitive than Unity, not to mention better suited to 2D development, as it uses pixels as units and has various utilities that make pixel art games work best. But it can do 3D too! It's cheaper than Game Maker, and has more of a UI than straight-up frameworks like Pygame. It also supports shaders (both GLSL and its own internal visual language), exports natively to Windows, Mac, Linux, Android and HTML, and offers a lot of other really cool functionality, all for the great price of $0.
Supported. I use Typora for all my creative writing; it's distraction-free, does its job great, and makes exporting to FF.net and AO3 really easy.
Seems to me the problem lies elsewhere. When we read poetry or look at art, we usually do so while trying to guess the internal states of the artist who created that work, and that is part of the enjoyment. This is because we (used to) know for sure that such works were created by humans, as a form of communication. It's the same reason why we value an original over a nigh-perfect copy: an ineffable wish to establish a connection with the artist, hence with another human being. Often this actually results in projection, with us ascribing to the artist internal states they didn't have, and some theories (Death of the Author) try to push back against this approach to art, but I'd say this is still how lots of people actually enjoy art, at a gut level. (Incidentally, this is also part of what IMO makes modern art so unappealing to some: witnessing obvious signs of technical skill, as one might in, say, the Sistine Chapel's frescoes, deepens the connection, because now you can imagine all the effort that went into each brush stroke, and that alone evokes an emotional response. Whereas knowing that the artist simply splattered a canvas with paint to deconstruct the notion of the painting or whatever may be satisfying on an intellectual level, but it doesn't quite convey the same emotional weight.)
The problem with LLMs and diffusion art generators is that they upend this assumption. Suddenly we can read poetry or look at art and know that there's no intent behind it; or even if there was, it would be nothing like a human's wish to express themselves. At best, the AIs are perfect mercenaries, churning out content fine-tuned to appease their commissioner without a shred of inner life poured into it. The reaction people have to this isn't about the output being too bad or too dissimilar from human output (though, to be sure, it's not at the level of human masters yet). The reaction is to the content giving the lie to the notion that the material content of the art (the words, the images) was ever the point. Suddenly we see the truth laid bare: the Mona Lisa wouldn't quite be the Mona Lisa without the knowledge that at some point centuries ago Leonardo da Vinci slaved over it in his studio, his inner thoughts as he did so now forever lost to time and entropy. And some people feel cheated by this revelation, but don't necessarily articulate it as such, preferring to pivot to "the computer is not doing REAL art/poetry" instead.
For me, after the yearly progression in 2019 and 2020, I was surprised that GPT-4 didn’t come out in 2021
Bit of an aside, but even though coding is obviously one of the jobs least affected, I'd say we should take into account that the unusual circumstances from 2020 onward might have slowed the development of any projects ongoing at the time. It might not be fair to make a straight comparison. COVID froze or slowed down plenty of things, especially in mid-to-late 2020.
It's ordinary computational complexity reasoning: if one part of your program scales like n^2 and another like n, then for large enough n the former will overtake the latter and pretty much dominate the total cost. That said, as someone pointed out, the specifics matter too. If your total cost were something like n^2 + 1,000,000,000n, it would take a very big n for the quadratic term to finally make itself felt properly. So depending on the details of the architecture, and on how it was scaled up in ways other than just increasing context, the scaling might not actually look very quadratic at all.
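As a toy illustration of the crossover (the cost function and constants are just the hypothetical ones from above, not any real model's):

```python
# Toy cost model: total(n) = n**2 + 1_000_000_000 * n.
# The quadratic term only dominates once n exceeds the linear coefficient.

def total_cost(n: int) -> int:
    return n**2 + 1_000_000_000 * n

def quadratic_share(n: int) -> float:
    """Fraction of the total cost contributed by the n^2 term."""
    return n**2 / total_cost(n)

# The n^2 term reaches exactly 50% of the cost when n**2 == 1e9 * n,
# i.e. at n == 1_000_000_000; far below that, the cost looks linear.
for n in (10**6, 10**9, 10**12):
    print(f"n = {n:>13}: quadratic share = {quadratic_share(n):.4f}")
```

Below the crossover the quadratic term contributes under 0.1% of the cost, which is why measured scaling can look linear for a long time.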
Can we really be sure there is not a shred of inner life poured into it?
Kind of a complicated question, but my meaning was broader. Even if the AI generator had consciousness, it doesn't mean it would experience anything like what a human would while creating the artwork. Suppose I gave a human painter the theme "a mother". The resulting work might reflect feelings of warmth and nostalgia (if they had a good relationship), or it might reflect anguish, fear, paranoia (if their mother was abusive), or whatever. Now, Midjourney could probably do all of these things too (my guess, in fact, is that it would lean towards the darker interpretation; it always seems to), but even if there were something with subjective experience inside, that experience would not connect the word "mother" to any strong emotions. Its referents would be other paintings. The AI would just be doing metatextual work, and this tends to be fairly soulless when done by humans too (they say artists need lived experience to create interesting works for a reason; simply churning out tropes absorbed from other works is usually not the road to great art). If anything, considering its training, the one "feeling" I'd expect from the hypothetical Midjourney-mind would be something like "I want to make the user satisfied", over and over, because that is the drive etched into it by training. All the knowledge it can have about mothers or dogs or apples is just academic: a mapping between words and certain visual patterns that are not special in any way.
Do you mean that you expect OpenAI deliberately wrote training examples for GPT based on Gary Marcus’s questions, or only that because Marcus’s examples are on the internet and any sort of “scrape the whole web” process will have pulled them in?
Surely Column B, and maybe a bit of Column A. But even if the researchers didn't "cheat" by specifically fine-tuning the model on tasks someone had helpfully pointed out it failed at, I think the likelihood of the model picking up on the exact same pattern that appeared verbatim in its training set isn't zero. So something to test would be to diversify the problems a bit along similar lines, to see how well it generalizes. (Note that I generally agree with you on the goalpost-moving that always happens with these things; but let's try being rigorous and playing Devil's advocate when running any test with a pretence of scientific rigour.)
Job Board (28 March 2033)
I had never heard the claim that salt makes pasta cook faster. I know that some people only add salt when the water is on the point of boiling because this makes the boiling happen faster, which is also true but a negligibly tiny effect: adding salt earlier just raises the boiling point slightly, whereas adding it when the water is already about to boil breaks surface tension and adds nucleation centres, which precipitate the formation of bubbles.
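Just to put a number on how tiny the boiling-point effect is, here's a quick ebullioscopic estimate (the physical constants are standard; the 10 g of salt per litre is my assumed typical dose for pasta water):

```python
# Boiling-point elevation: delta_T = Kb * molality * i (van 't Hoff factor).
KB_WATER = 0.512   # K*kg/mol, ebullioscopic constant of water
M_NACL = 58.44     # g/mol, molar mass of NaCl
VANT_HOFF_I = 2    # NaCl dissociates into Na+ and Cl-

def boiling_point_elevation(grams_salt: float, kg_water: float = 1.0) -> float:
    """Rise in boiling point (kelvin) from dissolving salt in water."""
    molality = (grams_salt / M_NACL) / kg_water
    return KB_WATER * molality * VANT_HOFF_I

# An assumed 10 g of salt per litre of water: under a fifth of a degree.
print(f"{boiling_point_elevation(10):.3f} K")
```

So a typical salting raises the boiling point by well under 0.2 K, which is why the effect on cooking time is negligible either way.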
The complete lack of rational cost benefit analysis (across the political spectrum) for the various measures was truly disheartening.
To be fair, there was also an environment of high uncertainty in which making a good CBA without data was simply impossible, and calling for extensive, unrealistic standards of proof was a common dithering technique from people who simply didn't want anything done on principle. We still lack good enough data on transmission properties to, e.g., properly estimate the benefits of ventilation in reducing it, three years into this. I was honestly baffled that the first thing done in March 2020 wasn't to immediately estimate, in multiple experiments, how long the virus stayed viable in the air or on surfaces, how much exposure caused infection, etc. We had a couple of papers at best, and not very good ones, apparently because the experiments were hard to do and required a P4 lab or so. Meanwhile the same researchers would probably encounter the virus daily at the grocer's.
I think the problem here is also forced choices, which were themselves loaded on purpose. If I tell you the two choices are "let COVID spread unimpeded" or "lock down without any support mechanism, so that the economy crashes so hard it kills more people than COVID would" (honestly, I don't think we actually fared that badly in Western countries, though; pre-vaccine COVID really would have killed a fuckton of people if it got into full swing), then I'm already loading the choice. Many politicians did this because, essentially, they were so pissy about having to do something that ran counter to their ideological inclinations that they took a particular petty pleasure in doing it as badly as they could, just to drive home that it was bad. This is standard "we believe the State is bad, which is why we will take charge of the State and then manage it like utter fuckwits to demonstrate that the State is bad" right-wing libertarian-ish behaviour. Boris Johnson and the British Tories in particular are regularly guilty of this, and were during the pandemic as well.
A third, saner option would have been: "close schools, legally mandate that every employer who can allow work from home make their people work from home, implement appropriate security measures for those who otherwise can't, close businesses like restaurants and cinemas temporarily, then tax (still temporarily!) the increased earnings of those citizens and companies less affected by these measures to pay for supporting the businesses and people more affected, so that the former don't go bankrupt and the latter don't starve". You know, an actual coordinated action that aims at both minimizing and fairly spreading the (inevitable) suffering that comes with being in a pandemic. Do that on and off while developing testing and tracing capacity, as well as all sorts of mitigation measures, until a vaccine is ready. Then try to phase out of emergency mode into a more sustainable regime that still takes managing infection rates and their human and economic costs seriously.
We… really didn't get that. But the original sin was IMO mainly in the way pandemic plans were already laden with ideology from the get-go. The whole "let it rip" thing the UK tried before desperately backtracking? That WAS our official pandemic plan. Designed for flu rather than a coronavirus with twice the R0, admittedly, but still. The best idea they could come up with was essentially "do nothing, but pretend it's on purpose to look more clever", because everything else felt inadmissible as it impinged on this or that interest or assumption that couldn't possibly be broken. As it turns out, the one thing that plan underestimated, for all its pretences of being a masterpiece of grounded realpolitik, was the pressure from people not wanting to get sick or die. Who could have guessed. As a result, the backlash and the following measures like lockdowns were implemented in a rush, and thus very poorly, without real plans or coordination. It might have helped to see that coming first.
From what I understand, the reason has to do with the GDPR, the EU's data protection law. It's pretty strict stuff: it essentially says that you can't store people's data without their active permission, you can't store people's data without a demonstrable need (one that isn't just "I wanna sell it and make moniez off it"), you can't store people's data past the end of that need, and you always need to give people the right to delete their data whenever they wish.
Now, this puts ChatGPT in an awkward position. Suppose you have a conversation that includes some personal data, it gets used to fine-tune the model, and then you want to back out… how do you do that? Could the model one day just spit out your personal information to someone else? Who knows?
It’s problems with interpretability and black box behaviour all over again. Basically you can’t guarantee it won’t violate the law because you don’t even know how the fuck it works.
Uneasily looks at CO2 concentration and global average temperatures graphs for the last decades
The problem is also that, in this sense, the interests of the civilisation (as a single entity) and the interests of the individuals inside it can be dramatically misaligned. Shocks, well, tend to kill people in droves and make others' lives miserable (while, arguably, in some cases they also make some lives better, if the fallen system was oppressive to them). Just like mass extinctions create biodiversity and reshuffle the genetic deck in the long term, but also kill a lot of living beings.

So the hypothetical solutions for the steady-state future in which shocks aren't a thing any more (how do we allow some slack to exist anyway?) are relevant before it too, because they could be applied to give a civilisation some ability to drift away from its path of pure optimisation, sacrificing productivity in the name of resilience and flexibility. That might prevent those shocks in the first place, when they're caused by the civilisation's own stupid rigidity. If we're able to learn to do that, then we can do better now and do better in that future. If we can't, we're doomed to be at the mercy of catastrophes of our own creation now, and if we ever make it to that future at all (having survived the dangers of climate change, nuclear weapons, nanotechnology, AI, and who knows what else we could turn on ourselves through that sort of sheer civilisation-wide single-mindedness), then we're doomed to fall prey to perfect optimisation and just do the same dull things forever.

Which, by the way, is something I think we're worryingly seeing signs of in the political sphere. While we don't have the power to control the forces of nature, we have ever more power to control other humans, and that power is being turned more and more towards enforcing homogeneity and paralysing change by those who hold it. See for example China's experiments with social credit scores and the like.
A people rising up in revolution (the political/human equivalent of a shock) becomes ever less likely, both because of weapons technology and because of these methods of micromanaging consensus and dissent. You could easily end up with perfectly stable technocratic or authoritarian governments that do nothing except perpetuate themselves, bringing about another form of stasis that can only harm humanity in the long term.