Expression. Civics. Game design. http://aboutmako.makopool.com
I came across the advice “assume that build might get called every frame” just today, and ah shit, okay, I might understand what’s happening: it’s running the whole comment-sorting algorithm every frame during transition animations (this doesn’t explain the laggy scrolling, though). Incidentally, I was just coding up another view that stows query results instead of regenerating them each time.
Why the hell is it doing this during page transition animations, though? The layout of the widgets in the next page doesn’t change during the animation, and it would be a terrible transition animation if it did.
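Since this thread has no code of its own, here’s the “stow the results” pattern sketched in Python (class and field names are hypothetical, not from the actual project); in Flutter the cached value would live in a State object so that build can run every frame without re-sorting:

```python
class CommentView:
    """Sketch of caching a derived result instead of recomputing it.

    The sorted list is computed once when the inputs change and reused
    on every subsequent build() call, so build() is cheap per-frame.
    """

    def __init__(self, comments):
        self._comments = list(comments)
        self._sorted = None  # cached result; None means "stale"

    def add_comment(self, comment):
        self._comments.append(comment)
        self._sorted = None  # inputs changed: invalidate the cache

    def build(self):
        # Safe to call every frame: sorting only happens after a change.
        if self._sorted is None:
            self._sorted = sorted(
                self._comments, key=lambda c: c["score"], reverse=True
            )
        return self._sorted
```

The point of the sketch is just that invalidation happens at mutation time, not at read time, so a framework that calls build arbitrarily often pays nothing extra.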
This suggestion was really helpful btw; over the past couple of weeks I’ve been developing a mockup of tasteweb in Flutter. I noticed you were the OP of a reddit thread asking for examples of flutter desktop apps. That thread was *also* helpful to me: it led me to try authpass’s app, which performed extremely well on my linux box, informing me that flutter actually is pretty performant and the performance problems I’m having are unique to my project/build config. Ugh. Still don’t know what to do. But at least I know it’s not flutter itself now.
Even just scrolling is horrifically laggy.
There was an invite chain proposed in the Lesswrongers Slack. I don’t know if it ever got going, but the comments are still there in #open.
Well I’m not sure there’s any reason to think that we can tell, by looking at the mathematical idealizations, that the inductive parts will take about the same amount of work to create as the agentic parts, just because the formalisms seem to weigh similar amounts (and what does that seeming mean?). I’m not sure our intuitions about the weights of the components mean anything.
Wondering whether Integrated Information Theory dictates that most anthropic moments have internet access
Hm, to clarify: by “consciously” I didn’t mean experiential weight/anthropic measure. In this case I meant the behaviors generally associated with consciousness: metacognition, centralized narratization of thought, that stuff, which I seem to equate with deliberateness… though maybe those things are only roughly equivalent in humans.
I’m not aware of a technical definition of “general inductor”. I meant that it’s an inductor that is quite general.
My opinion is that the St. Petersburg game isn’t paradoxical: it is very valuable and you should play it. It’s counterintuitive to you because you can’t actually imagine a quantity that comes in linear proportion to utility; you have never encountered one, and none seems to exist.
Money, for instance, is definitely not linearly proportional to utility: the more you get, the less it’s worth to you, and at the extremes it can command no more resources than what the market offers. If you get enough of it, the market will notice and it will all become valueless.
Every resource that exists has sub-linear utility returns in the extremes.
(Hmm. What about land? Seems linear, to an extent)
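For concreteness, here’s a quick numeric sketch of the claim (my addition, not from the thread): truncating the St. Petersburg sum, linear utility grows without bound as you add terms, while a sublinear utility like log converges:

```python
import math

def truncated_value(n, utility):
    """Expected utility of the first n terms of the St. Petersburg game.

    The payoff 2**k occurs with probability 2**-k, for k = 1..n.
    """
    return sum((0.5 ** k) * utility(2 ** k) for k in range(1, n + 1))

# Linear utility: every term contributes exactly 1, so the sum is n.
# It diverges as n grows; that is the "paradox".
linear = truncated_value(50, lambda x: x)

# Log utility: terms are k*ln(2)/2**k, which sum to 2*ln(2) ~ 1.386.
# Any sublinear utility tames the game into a finite, ordinary bet.
logarithmic = truncated_value(50, math.log)
```

So the “infinite value” only appears if you insist on a resource whose utility is exactly linear forever, which is the thing the comment argues doesn’t exist.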
found a 3d hangout platform, might be worth a look https://www.q42.nl/en/work/mibo-app
Regarding artificial sunlight: a technology that imitates it shockingly well in many ways, giving a sense of a window to a light source with infinite distance: https://www.coelux.com/en/about-us/index
but you sound exactly like the kind of person we want to attend
What, but I’m just a stray dog who makes video games about… [remembers that I am making a game that centers around an esolang. Turns and looks at my BSc in formal languages and computability. Remembers all of the times a layperson has asked whether I know how to do Hacking and I answered “I’m not really interested in learning how to break things. I’m more interested in developing paradigms where things cannot be broken”]… oh.
(If you think the question is too underspecified to answer, you probably shouldn’t try to post an answer in the answers section. There is a comments section.)
(I’ll try to work this into the question description)
Are you asking about which kinds of attacks can’t be stopped by improving software?
That would be an interesting thing to see discussed, sure.
Or are you asking about the theoretical limits of PL technology?
No, though that might be interesting from the perspective of… what kinds of engineering robustness will exist at the limits of the singularity. (This topic is difficult to motivate, but I hope the reader would agree that we should generally try to make forecasts about cosmically large events even when the practical reasons to do so are not obvious. It seems a priori unlikely that questions of, say, what kinds of political arrangements are possible in a post-organic galaxy-sized civilization won’t turn out to be important in some way, even if we can’t immediately see how.)
But I’m mainly wondering from a practical perspective. Programming languages are generally tools of craft; they are for getting things done in reality, and even many of the most theoretical languages aspire to that. I’m asking mainly from a perspective of…
Can we get the engineers of this imperiled civilization to take their responsibilities more seriously, generally? When it’s helpful to be able to prove that something will work reliably before we put it into production, can we make that easy to do? Can any of the tooling design principles from that be generalized to the AGI alignment problem?
With regard to languages like Coq: will they actually be used in reality? Why or why not? Should we promote them? Should we fund their development?
I’m interested in different timescales:
the most security-promoting development processes that are currently in wide use.
the most security-promoting development processes that are possible with recently developed technology.
processes that could be developed now.
processes that could come to exist 10 years from now; processes that might exist 30-50 years from now.
perhaps some impossibility theorems that may bind even the creatures of the singularity.
Yeah, this is actually one of the key takeaways of the ARPA/PARC paper: leadership’s role isn’t so much to control or to make very many decisions; their job is to keep everyone lined up with a shared vision so that their actions and decisions fit together. Alignment is the thing that makes organizations run well. It’s very important.
abiogenesis being so early on Earth is 100% survivorship bias
Being early on Earth was not necessary for survival. Similarly, being early relative to the formation of stars of suitable temperatures wasn’t especially favored by anthropics either. Neither of those things had to happen.
^_^… I might have to curb your excitement a bit and mention that the reason I know about trustnet is that I’ve known cblgh for years, and that I wrote most of this before reading any of his writing. And we still haven’t really gotten around to reconciling our design thoughts. I think most of this post would be boring to him. Hmm. To do that I’ll have to write a bit about
how it is feasible for chat and forums to converge
More illustrations of UIs for doing that.
the dire civilizational need for a wot-moderated forum
Maybe I should talk to my friend Demi about designing it all to be humane, first, instead of just having potentially humane underlying systems, because I’ve been totally neglecting that dimension of it and it’s their specialty. This would probably aid the motivation problems I’ve been having. Right now it feels difficult to envision a humane content discovery system.
eschew contextualizing because it ruins the commons
I don’t understand. What do you mean by contextualizing?
Did Bostrom ever call it singleton risk? My understanding is that it’s not clear that a singleton is more of an x-risk than its opposite: a liberal multipolar situation under which many kinds of defecting/carcony factions can continuously arise.