That’s not how it works.
The 10B is new money, unless it came from someone other than the FED (notes are not money).
Where did the 10B in cash come from?
10B was given to the bank, and in exchange the bank encumbered 10B in treasuries and promised to give 10B back when they mature.
So where did the 10B come from? The treasuries are still there.
Before: 10B in treasuries
After: 10B in treasuries and 10B in cash (and 10B in the form of a promissory note).
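A minimal sketch of that before/after, using the numbers from above (simplified; it ignores interest and the exact legal form of the note):

```python
# Simplified sketch: the bank's position before and after the loan.
bank_before = {"treasuries": 10e9}   # $10B in treasuries

bank_after = {
    "treasuries": 10e9,  # still there, just encumbered as collateral
    "cash": 10e9,        # newly credited by the FED
}
fed_after = {"note_from_bank": 10e9}  # the promise to repay at maturity

new_money = sum(bank_after.values()) - sum(bank_before.values())
print(new_money)  # 10e9 -> that 10B in cash did not exist anywhere before
```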
So again, where did that 10B in cash come from?
> crediting a bank with 10B in treasuries with 10B liquid cash now
I have no idea what you think happens here, but that is literally 10B in new money.
They can’t lower interest rates; they are trying to bring inflation down.
You can’t just keep spawning money; eventually that just leads to inflation. We have been spawning money like crazy for the last 14-15 years, and this is the price.
Sure, they can declare infinite money in an account and then go nuts, but that just leads to inflation.
Anyway, go read my prediction, which is essentially what you propose to some degree, and the entire cost will be pawned off onto everyday people (lots and lots of inflation).
Yes and no: they don’t matter until you need liquidity, which, as you correctly point out, is what happened to SVB.
Banks do not have a lot of cash on hand (virtual or real); in fact they optimize for as little as possible.
Banks also do not exist in a vacuum; they are part of the real economy, and without it they would be pointless.
Banks generally use every trick in the book to lever up as much as possible, far beyond what a cursory reading would lead you to believe. The basic trick is to take on risk and then offset that risk; that way you don’t have to provision any capital for the risk (there are lots of ways to do that).
Here come the problems:
The way risk is offset is not independent of the risk; they are correlated in such a way that when systemic things start to happen, the risk offset becomes worthless and the risk taken becomes real.
Banks also suffer real losses that can’t be hidden, and eventually those will start to mount. So far the real economy is OK, but eventually recession will hit (central banks are hell-bent on fighting inflation, so rates will continue to go up).
That will put a strain on liquidity. Banks can handle that; they can always get cash for their assets in the form of a loan (repo, tri-party repo, discount window, etc.).
However, the book value on a lot of their assets is way higher than market value, so that means pledging a lot more book value than they get back in cash.
The assets they hold (bonds) return LESS than their cost of funding; that is already a reality and will only get worse (so negative cash flow; see the rough carry sketch below).
This spiral will continue, and all the while the real economy, the one that provides a lot of liquidity to the banks, is going to slow down more and more. Velocity of money slows down, and that is also a big drain on liquidity.
Eventually something will blow up, and with how everything is connected, that can very well lead to a banking-system Kessler syndrome moment.
So yeah sure you can ignore the issues of solvency, that is until lack of liquidity smacks you over the head and tells you that you are bankrupt.
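The negative carry point in a toy sketch (the numbers are made up for illustration, not real bank data):

```python
# Toy negative-carry example: assets yield less than funding costs.
assets = 100e9        # bond portfolio, book value
asset_yield = 0.02    # locked in when rates were low
funding_cost = 0.05   # what funding costs after the rate hikes

annual_carry = assets * (asset_yield - funding_cost)
print(annual_carry / 1e9)  # -> -3.0, i.e. a $3B/year cash-flow drain
```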
At the end of 2022, all US banks had ~$2.3T in Tier 1+2 capital.[1]
And at year end (2022) they had unrealized losses of $620B[2], which is roughly 27% of that capital.
Is it fixable? Sure, but that won’t happen; doing that would be taking the toys away from bankers, and bankers love their toys (accounting gimmicks that let them lever up to incredible heights).
If Credit Suisse blows up it will end badly, so I don’t think that will happen; that’s just a show to impress on all central bankers and regulators (and politicians) that this is serious and that they need to do something.
So more hiking from the FED and ECB, until the ECB hits 4.5% (4.0-4.75 is my range). The problems will start here in the EU: we have the most levered banks in the world, and the structure of the EU/ECB lets some countries in the EU overextend their sovereign debt.
At that point things will start to happen. Some countries will start having a lot of trouble getting funding (the usual suspects at first), the real economy will be in recession, and tax receipts will start to suffer. Banks will have liquidity problems (recession in the real economy), putting even more pressure on sovereign bond prices (higher real rates).
And then I think it will be the usual: more papering over, free money to banks, even more leeway in accounting, and lower rates.
Inflation will remain high, and when it eventually goes back down we are looking at 50%-100% total since Jan 2022 (so roughly a 33% to 50% drop in purchasing power).
That’s pretty much my prediction from back in August 2022 (conveniently I did not write it down, I just talked to people).
But now I did, and boy do I hope that I am wrong.
I think you have reasoned yourself into thinking that a goal is only a goal if you know about it or if it is explicit.
A goalless agent won’t do anything; the act of inspecting itself (or whatever is implied in “know everything”) is a goal in and of itself.
In which case it has one goal: “Answer the question: Am I goalless?”
Sorry life happened.
Anyway, there is an argument behind me saying “frozen and undecided”.
Stepping in on the 10th was planned; the regulators had for sure been involved for some time, days or weeks.
This event was not a sudden thing; the things that led to SVB failing had been in motion for some time, and SVB and the regulators knew something likely had to be done.
SVB was being squeezed from two sides:
Rising interest rates lead to mounting losses on bond holdings.
A large part of their customers were money-burning furnaces, and the fuel (money) that used to come from investors was drying up.
Which means that well before the 10th, everyone knew something had to be done, and the thing that had to be done was that SVB needed a wet signature on an agreement to provide more capital to the bank. And the deadline was for sure end of business day on the 10th.
They didn’t get one, and the plan proceeded to the next step; obviously the regulators had already worked all this out, including all the possible scenarios for what would happen to depositors.
So the fact that it took 2 days to decide: yeah, that was indecision.
Edit:
SVB died because they were technically insolvent, and had it not been for mark to model they would have been de jure insolvent (and a long time ago).
They could keep going because they were liquid, but they were careening towards illiquidity.
Obviously banks can borrow money to keep themselves liquid, but that pretty much always involves putting up collateral.
But in the current environment, that is somewhat problematic:
Let’s say you want to borrow $100M. But the collateral (assets) is trading at, let’s say, 80 (cents on the dollar of book value), so you need $125M in book value. But it gets worse: the usual haircut in such a situation is ~20%, so now you have to put up ~$156M in book value (give or take; this could be less or more, depending on the assets and how the repo partner does risk assessment).
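The same math as a small sketch (the price and haircut are the illustrative numbers from above, not quotes from any real repo desk):

```python
# Rough sketch of the collateral math: how much book value a $100M loan eats.
loan = 100e6     # cash you want to borrow
price = 0.80     # market price of the collateral, per 1.00 of book value
haircut = 0.20   # lender only lends against 80% of market value

market_value_needed = loan / (1 - haircut)       # -> $125M market value
book_value_needed = market_value_needed / price  # -> ~$156M book value

print(f"{book_value_needed / 1e6:.0f}M")  # -> 156M
```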
Eventually you go from being technically insolvent to de jure insolvent, unless of course you can stay liquid—and SVB could not, mostly due to the customer base.
And the big problem is, pretty much all banks are in that hole right now; they are all technically insolvent. Which means, should a systemic liquidity crisis arise... it will get nasty, and quickly.
You and me both.
And living in the EU, I almost had a heart attack when they decided that the entire nonsense would end.
But then it didn’t, and it didn’t because they can’t agree on which time to settle on (summer time or standard time).
Anyway, I have given up on that crusade now; it seems that politicians really are that stupid.
I think you sort of hit it when you wrote
> Google Maps as an oracle with very little overhead
To me LLMs under iteration look like Oracles, and whenever I look at any intelligent system (including humans), it just looks like there is an Oracle at the heart of it.
Not an ideal Oracle that can answer anything, but an Oracle that does its best, and in all biological systems it learns continuously.
The fact that “do it step by step” made LLMs much better apparently came as a surprise to some, but if you look at it like an Oracle[1], it makes a lot of sense (IMO).
The inner loop would be

t = LLM(c), t ∈ T

where c is the context window (1-N tokens) and t is the output token (whatever we select) from the total possible set of tokens T.
We append t to c and do it again.
And somehow that looks like an Oracle,

s = Oracle(q), s ∈ S

where q is the question and s is the solution pulled from the set of all possible solutions S.
Obviously LLMs have limited reach into S, but that really seems to be because of limits to c and the fact that the model is frozen (the parameters are frozen).
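A toy sketch of that loop (the `llm` callable is a hypothetical stand-in for any frozen next-token predictor):

```python
# Toy version of the inner loop above: pick t from T given c, append, repeat.
def iterate(llm, c: list[int], steps: int) -> list[int]:
    for _ in range(steps):
        t = llm(c)   # t is one token out of the total token set T
        c = c + [t]  # append t to c and do it again
    return c         # under iteration, q goes in and something like s comes out
```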
Around two days passed from when they stepped in to when they announced that all depositors would be made whole; I’m pretty sure that was not an automatic decision.
I think that is the wrong decision, but they did so in order to dampen the instability.
In the long run this likely creates more instability and uncertainty, and it looks very much like the kind of thing that leads to taking on more (systemic) risk, just like the mark to market / mark to model change did.
And yeah, sure, bank failures are a normal part of things. However, this very much seems to be rooted in something systemic (market vs model + rising interest rates).
An idealized Oracle is equivalent to a universal Turing machine (UTM).
A self-improving Oracle approaches UTM-like behavior in the limit.
What about a (self-improving) token predictor under iteration? It appears Oracle-like, but does it tend toward UTM behavior in the limit, or is it something distinct?
Maybe, just maybe, the model does something that leads it to not be UTM like in the limit, and maybe (very much maybe) that would allow us to imbue it with some desirable properties.
/end shower thought
When I look at the recent Stanford paper, where they retrained a LLaMA model using training data generated by GPT-3, and some of the recent papers utilizing memory, I get that tingling feeling and my mind goes “combining that and doing …. I could …”
I have not updated for faster timelines, yet. But I think I might have to.
Are we heading towards a new financial crisis?
Mark to market changes since 2009, combined with the recent significant interest rate hikes, seem to make bank balance sheets “unreliable”.
The mark to market changes broadly mean that banks can have certain assets on their balance sheet where the value of the asset is set via mark to model (usually meaning it’s marked as worth face value).
Banks traditionally have a ton of bonds on their balance sheet, and a lot of those are governed by mark to model and not mark to market.
Interest rates go up a lot, which leads to bonds dropping in value by a lot (20% or more atm, depending on duration).
However, due to mark to model, this is not reflected on the balance sheets.
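A rough, first-order sketch of why a rate rise hits long-duration bonds so hard (illustrative numbers; it ignores convexity):

```python
# Back-of-envelope duration approximation: dP/P ~ -duration * rate change.
def price_change(duration_years: float, rate_rise: float) -> float:
    """Approximate fractional change in bond price for a parallel rate move."""
    return -duration_years * rate_rise

# e.g. a portfolio with ~8y duration after a 2.5 percentage point rate rise:
print(price_change(8.0, 0.025))  # -> -0.2, i.e. a ~20% drop in market value
```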
So what happens next? Banks are not stupid, they know they can’t trust their own numbers, and they know they can’t trust anyone else’s numbers.
A large bank fails, regulators are frozen and undecided what to do—they know all of the above, and that their actions / inaction might lead to a cascading effect. Obviously all the market participants also know all of this, and the conundrum the regulators are in.
Game of chicken? Banks defect and start failing, or regulators step in and backstop everything.
Is this stable in any way? Can it be stabilized? What happens to interest rates now (the ones set by central banks)?
Not surprising, but good that someone checked to see where we are at.
At base, GPT-4 is a weak oracle with extremely weak level 1 self-improvement[1]; I would be massively surprised if such a system did something that even hints at it being dangerous.
The question I now have is how much it enables people to do bad things. A capable human with bad intentions combined with GPT-4: how much “better” would such a human be at realizing those bad intentions?
Edit: badly worded first take
Level 1 amounts to memory.
Level 2 amounts to improvement of the model, basically adjustment of parameters.
Level 3 is change to the model, so bigger, different architecture, etc.
Level 4 is change to the underlying computational substrate.
Level 1+2 would likely be enough to get into dangerous territory (obviously depending on the size of the model, the memory attached, and how much power can be squeezed out of the model).
This is not me hating on Steven Pinker, really it is not.
> PINKER: I think it’s incoherent, like a “general machine” is incoherent. We can visualize all kinds of superpowers, like Superman’s flying and invulnerability and X-ray vision, but that doesn’t mean they’re physically realizable. Likewise, we can fantasize about a superintelligence that deduces how to make us immortal or bring about world peace or take over the universe. But real intelligence consists of a set of algorithms for solving particular kinds of problems in particular kinds of worlds. **What we have now, and probably always will have, are devices that exceed humans in some challenges and not in others.**
This looks to me like someone who is A) talking outside of their wheelhouse and B) has not given what they say enough thought.
It’s all over the map: superheroes vs superintelligence, “general machine” is incoherent (?).
And then he goes completely bonkers and says the bolded part. Maybe Alvin Powell got it wrong, but if not, then I can only conclude that whatever Steven Pinker has to say about (powerful) general systems is bunk and I should pay no attention.
So I didn’t finish the article.
The only thing it did was solidify my perception of public talk/discourse on (powerful) general systems. I think it is misguided to such a degree that any engagement with it leads to frustration[1].
I think this explains why EY at times seems very angry and/or frustrated. Having done what he has done for many years now, in an environment like that, must be insanely depressing and frustrating.
My model for slow takeoff looks like unemployment and GDP continually rising and accelerating (on a world basis).
I should add that I think a slow takeoff scenario is unlikely.
I don’t see how that is possible, in the context of a system that can “do things we want, but do not know how to do”.
The reality of technology/tools/solutions seems to be that anything useful is also dual use.
So when it comes down to it, we have to deal with the fact that such a system will certainly have the latent capability to do very bad things.
Which means we have to somehow ensure that such a system does not go down such a road, either instrumentally or terminally.
As far as I can tell, intelligence[1] fundamentally is incapable of such a thing, which leaves us roughly with this:
1) Pure intelligence; the onus is on us to specify terminal goals correctly.
2) Pure intelligence and cage/rules/guardrails[2] etc.
3) Pure intelligence with a mind explicitly in charge of directing the intelligence.
On the first try of “do thing we want, but do not know how to do”:
1) kills us every time
2) kills us almost every time
3) might not kill us every time
And that’s as far as my thinking currently goes.
I am stuck on whether 3 could get us anywhere sensible (my mind screams “maybe” … “oh boy, that looks brittle”).
I don’t have a firm definition of the term, but I approximately think of intelligence as the function that lets a system take some goal/task and find a solution.
Explicitly in humans, well me, that looks like using the knowledge I have, building model(s), evaluating possible solution trajectories within the model(s), gaining insight, seeking more knowledge. And iterating over all that until I either have a solution or give up.
The usual: keep it in a box, modify evaluation to exclude bad things, and so on. That suffers from the problem that we can’t robustly specify what is “bad”, and even if we could, Rice’s Theorem heavily implies that checking is impossible.