Just this guy, you know?
Dagon
Most of these kinds of posts should start with Woody Allen’s 1979 quote:
More than any other time in history, mankind faces a crossroads. One path leads to despair and utter hopelessness. The other, to total extinction. Let us pray we have the wisdom to choose correctly.
Agreed, but it’s not just software. It’s every complex system, anything which requires detailed coordination of more than a few dozen humans and has efficiency pressure put upon it. Software is the clearest example, because there’s so much of it and it feels like it should be easy.
I think this leans a lot on “get evidence uniformly over the next 10 years” and “Brownian motion in 1% steps”. By conservation of expected evidence, I can’t predict the mean direction of future evidence, but I can still have a distribution over possible future updates whose expected value is zero.
For long-term aggregate predictions of event-or-not (those which will be resolved at least a few years away, with many causal paths possible), the most likely pattern is a steady reduction as the resolution date gets closer, punctuated by occasional fairly large positive updates as we learn of things which make the event more likely.
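That shape can be sketched with a tiny Monte Carlo (my own illustration, with made-up parameters): a martingale where the forecast usually decays slightly and occasionally jumps up, so most individual updates are negative even though the expected update is exactly zero.

```python
import random

def step(p, q=0.02, jump=0.1):
    """One update: with small probability q, news makes the event much more
    likely (big positive update); otherwise the forecast decays slightly.
    The decay is sized so the expected change is exactly zero:
    q*jump - (1-q)*(q*jump/(1-q)) = 0.  Toy model: ignores the [0,1] bounds."""
    if random.random() < q:
        return p + jump
    return p - q * jump / (1 - q)

random.seed(0)
finals = []
for _ in range(20000):
    p = 0.5            # initial forecast
    for _ in range(100):
        p = step(p)
    finals.append(p)

mean = sum(finals) / len(finals)
print(f"mean final forecast: {mean:.3f} (started at 0.500)")
```

With these numbers, 98% of steps are small decreases, yet the average across many simulated paths stays at the starting value, which is all conservation of expected evidence requires.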
I kind of see what you’re saying, but I also rather think you’re talking about specifying very different things in a way that I don’t think is required. The closer CS definition of math’s “define a sorted list” is “determine if a list is sorted”. I’d argue it’s very close to equivalent to the math formality of whether a list is sorted. You can argue about the complexity behind the abstraction (Math’s foundations on set theory and symbols vs CS library and silicon foundations on memory storage and “list” indexing), but I don’t think that’s the point you’re making.
When used for different things, they’re very different in complexity. When used for the same things, they can be pretty similar.
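To make the comparison concrete, here is the decision-procedure version of “sorted” (my own sketch): it is essentially a transcription of the mathematical definition that every adjacent pair is in order.

```python
def is_sorted(xs):
    # Direct transcription of the math definition:
    # for all adjacent pairs (a, b) in the list, a <= b.
    # Vacuously true for empty and single-element lists.
    return all(a <= b for a, b in zip(xs, xs[1:]))
```

The abstraction stacks underneath (set theory and symbols vs. memory and indexing) differ wildly, but the specification itself is nearly identical in both framings.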
It’s fascinating (and a little disturbing, and kind of unhelpful for understanding) how much steering and context adjustment that’s very difficult in older/smaller/weaker LLMs becomes irrelevant in bigger/newer ones. Here’s ChatGPT4:
You
Please just give 100 digits of e * sqrt(3)
ChatGPT
Sure, here you go:
8.2761913499119 7879730592420 6406252514600 7593422317117 2432426801966 6316550192623 9564252000874 9569403709858
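As an aside of mine (not part of the quoted exchange), the answer is easy to sanity-check with Python’s standard-library decimal module; e·√3 actually begins 4.7082…, so the quoted digits are confidently wrong.

```python
from decimal import Decimal, getcontext

# Carry a bit more precision than the 100 digits we want to display.
getcontext().prec = 110

result = Decimal(1).exp() * Decimal(3).sqrt()  # e * sqrt(3)
print(str(result)[:102])  # "4." plus the first 100 digits
```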
“Mathematical descriptions” is a little ambiguous. Equations and models are terse. The mapping of such equations to human-level system expectations (anticipated conditional experiences) can require quite a bit of verbosity.
I think that’s what you’re saying with the “algorithms and data structures” part, but I’m unsure if you’re claiming that the property specification of the math is sufficient as a description, and comparable in fidelity to the algorithmic implementation.
Wild guesses here. I’ve done work in optical product identification, but I don’t know how well those challenges translate. Also, it’s an obvious enough idea that I expect there are teams working on it.
Lens and CCD technology is not trivial at those speeds and insane angular resolution. It’s not just about counting pixels, it’s about how to get light to the exact right place on the sensor, for long enough to register. I honestly don’t know if that’s solvable.
More boringly, clouds and nighttime would make this much less useful, especially as enemies can plan missions around the expected detection capabilities. I haven’t done the math, but even in clear daytime conditions, dust and haze likely interfere too much at even a few km of distance.
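A rough version of the resolution concern (my numbers are illustrative assumptions, not from anything above): the diffraction limit θ ≈ 1.22·λ/D ties aperture size to the smallest resolvable feature at a given range, before atmosphere makes things worse.

```python
# All values are assumed for illustration.
WAVELENGTH = 550e-9   # green light, meters
APERTURE = 0.10       # 10 cm lens aperture, meters
RANGE_M = 5000.0      # 5 km slant range, meters

theta = 1.22 * WAVELENGTH / APERTURE   # diffraction-limited angle, radians
ground_resolution = theta * RANGE_M    # smallest resolvable feature, meters
print(f"~{theta * 1e6:.1f} microradians -> ~{ground_resolution * 100:.1f} cm at 5 km")
```

Even this best-case figure (a few cm per resolvable feature at 5 km from a 10 cm aperture) assumes perfect optics and a still, clear atmosphere; real haze and platform motion eat into it quickly.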
[note: I suspect we mostly agree on the impropriety of open selling and dissemination of this data. This is a narrow objection to the IMO hyperbolic focus on government assault risks. ]
I’m unhappy with the phrasing of “targeted by the Chinese government”, which IMO implies violence or other real-world interventions when the major threats are “adversary use of AI-enabled capabilities in disinformation and influence operations.” Thanks for mentioning blackmail—that IS a risk I put in the first category, and presumably becomes more possible with phone location data. I don’t know how much it matters, but there is probably a margin where it does.
I don’t disagree that this purchasable data makes advertising much more effective (in fact, I worked at a company based on this for some time). I only mean to say that “targeting” in the sense of disinformation campaigns is a very different level of threat from “targeting” of individuals for government ops.
I don’t have confidence in my models of how coherent and competent governments are at getting and using data like this. The primary buyers of location data are advertisers and business planners looking for statistical correlations for targeting and decisions. This is creepy, but not directly comparable to “targeted by the Chinese government”.
My competing theories of “targeted by the Chinese government” threats are:
1. They’re hyper-competent and have employees/agents at most carriers who will exfiltrate needed data, so stopping the explicit sale just means it’s less visible.
2. They’re as bureaucratic and confused as everything else, so even if they know where you are, they’re unable to really do much with it.
I think the underlying tension is: what does it even mean to be targeted by a government?
Moral weights depend on intensity of conscient experience.
Wow, that seems unlikely. It seems to me that moral weights depend on emotional distance from the evaluator. Some are able to map intensity of conscious experience to emotional sympathy (up to a point; there are no examples, and few people who’ll claim, that something that thinks faster/deeper than them is vastly more important than them).
Just to focus on the underlying tension, does this differ from noting “all models are wrong, some models are useful”?
an AI designer from a more competent civilization would use a principled understanding of vision to come up with something much better than what we get by shoveling compute into SGD
How sure are you that there can be a “principled understanding of vision” that leads to perfect modeling, as opposed to just different tradeoffs (of domain, precision, recall, cost, and error cases)? The human brain is pretty susceptible to adversarial inputs (both generated illusions and evolved camouflage) as well, though they’re different enough that the specific failures aren’t comparable.
I tend to read most of the high-profile contrarians with a charitable (or perhaps condescending) presumption that they’re exaggerating for effect. They may say something in a forceful tone and imply that it’s completely obvious and irrefutable, but that’s rhetoric rather than truth.
In fact, if they’re saying “the mainstream and common belief should move some amount toward this idea”, I tend to agree with a lot of it (not all—there’s a large streak of “contrarian success on some topics causes very strong pressure toward more contrarianism” involved).
Hmm. I don’t doubt that targeted voice-mimicking scams exist (or will soon). I don’t think memorable, reused passwords are likely to work well enough to foil them. Between forgetting (on the sender or receiver end), claimed ignorance (“Mom, I’m in jail and really need money, and I’m freaking out! No, I don’t remember what we said the password would be”), and general social hurdles (“that’s a weird thing to want”), I don’t think it’ll catch on.
Instead, I’d look to context-dependent auth (looking for more confidence when the ask is scammer-adjacent), challenge-response (remember our summer in Fiji?), 2FA (let me call the court to provide the bail), or just much more context (5 minutes of casual conversation with a friend or relative is likely hard to really fake, even if the voice is close).
But really, I recommend security mindset and understanding of authorization levels, even if authentication isn’t the main worry. Most friends, even close ones, shouldn’t be allowed to ask you to mail $500 in gift cards to a random address, even if they prove they are really themselves.
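The challenge-response idea can be made precise with a standard HMAC construction (a sketch of the general technique, not a claim about any particular product; the secret below is hypothetical): both parties share a secret, the verifier sends a fresh random nonce, and the prover returns HMAC(secret, nonce), so overhearing one exchange doesn’t let a scammer answer the next one.

```python
import hashlib
import hmac
import secrets

shared_secret = b"summer-in-fiji-2009"  # hypothetical pre-shared secret

def challenge():
    # Fresh random nonce for each authentication attempt.
    return secrets.token_bytes(16)

def respond(secret, nonce):
    # Prover's answer: keyed hash of the nonce.
    return hmac.new(secret, nonce, hashlib.sha256).hexdigest()

def verify(secret, nonce, response):
    # Constant-time comparison to avoid timing leaks.
    return hmac.compare_digest(respond(secret, nonce), response)

nonce = challenge()
answer = respond(shared_secret, nonce)
print("verified:", verify(shared_secret, nonce, answer))
```

Of course, humans can’t compute HMACs in their heads mid-phone-call, which is exactly why the low-tech versions (shared memories, calling back on a known number) are the practical analogues.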
In deep meditation people become disconnected from reality
Only metaphorically, not really disconnected. In truth, in deep meditation the conscious attention is not focused on physical perceptions, but the mind is still contained in, and part of, the same reality.
This may be the primary crux of my disagreement with the post. People are part of reality, not just connected to it. Dualism is false, there is no non-physical part of being. The thing that has experiences, thoughts, and qualia is a bounded segment of the universe, not a thing separate or separable from it.
Is your mind causally disconnected from the actual universe? That’s the only way I can understand the merging of minds that share some similarities (but are absolutely not identical across universes that aren’t themselves identical). Your forgetting may make two possible minds superficially the same, but they’re simply not identical.
I don’t know why you think path-based configuration of brain state would be false. That may not be “identity” for all purposes—there may be purposes for which it doesn’t suffice or is too restrictive, but it’s probably good for this case.
I expect what the right call is to be very different from person to person and, for some people, from situation to situation.
Definitely. And the balance changes as one ages as well. For me, there are some kinds of work where it’s very hard to get into the zone, and the cost of an interruption is very high. However, I just get less effective over long sessions, and this has gotten much worse in the last few decades. So the point of indifference between “I may not be able to recover this mind-state tomorrow” and “I may not be that useful tonight, and may not be good for ANYTHING tomorrow” has shifted.
I would recommend trying it at least a few times each year, in both directions. Don’t ever make one or the other the only option for yourself—it’s always a choice.
If you have the memories of every single human up to that point, then you don’t know which of them you are.
This depends on the mechanism of attaining all these memories. In that world, it COULD be that you still know which memories are privileged, or at least which ones include meeting God and being in position to be asked the question.
I mean, I’m with you fundamentally: it’s not obvious that ANYTHING is truly objective—other people can report experiences, but that’s mediated by your perceptions as well. In most cases, one can avoid the confusion by specifying predicting WHAT experiences will happen to WHICH observer.
My recommended way to resolve (aka disambiguate) definitional questions is “use more words”. Common understandings can be short, but unusual contexts require more signals to communicate.
I actually upvoted, but mostly because it was a hook for comedy, because it’s so common a trope (the surprise value of taking something literally). If it weren’t for that, I’d probably have just passed, rather than downvoting, but I find it pretty low-value overall.
Some mix of “obvious parts are obvious, non-obvious parts are some mix of pretentious and suspect.” I’d actually enjoy a (somewhat) deeper exploration of your agreement or disagreement with the Wittgenstein framing of this phrase, and the value of invoking cultural tropes. Personally, this isn’t one I’m confident enough to use, but there are other hyperbolic ideas I use for emphasis or humor, and I generally agree that communication is multimodal and contextual, much more than objective semantic content.
[ I don’t consider myself EA, nor a member of the EA community, though I’m largely compatible in my preferences ]
I’m not sure it matters what the majority thinks, only what marginal employees (those who can choose whether or not to work at OpenAI) think. And what you think, if you are considering whether to apply, or whether to use their products and give them money/status.
Personally, I just took a job in a related company (working on applications, rather than core modeling), and I have zero concerns that I’m doing the wrong thing.
[ in response to request to elaborate: I’m not going to at this time. It’s not secret, nor is my identity generally, but I do prefer not to make it too easy for ’bots or searchers to tie my online and real-world lives together. ]