Proof-of-stake is still wasteful since it promotes pump-and-dump scams and causes people to waste their money on scam projects. If the creators are able to get their reward at the very beginning of a project, they will be more interested in short-term gains than in a token that lasts over the long term. Humans are not psychologically or socially equipped to invest in proof-of-stake cryptocurrencies since they tend to get scammed.
Joseph Van Name
Bitcoin mining is a real-world example of a goal that people spend an enormous amount of resources to attain, but this goal is useless or at least horribly inefficient.
Recall that the orthogonality thesis states that it is possible for an intelligent entity to have bad or dumb goals and that it is also possible for a not-so-intelligent entity to have good goals. I would therefore consider Bitcoin mining to be a prominent real-world example of the orthogonality thesis, as it is, in a sense, a dumb goal attained intelligently (though this example is imperfect).
Bitcoin’s mining algorithm consists of computing many SHA-256 hashes relentlessly. The Bitcoin miners are rewarded whenever they compute a suitable SHA-256 hash that is lower than the target. These SHA-256 hashes establish decentralized consensus about the state of the blockchain, and they distribute newly minted bitcoins. But besides this, computing so many SHA-256 hashes is nearly useless. Computing so many SHA-256 hashes consumes large quantities of energy and creates electronic waste.
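As a toy illustration, here is a minimal Python sketch of the proof-of-work loop: hash a candidate block with successive nonces until the double SHA-256 digest, read as an integer, falls below the target. The header format and the target here are simplified stand-ins for Bitcoin's actual 80-byte header and difficulty encoding.

```python
import hashlib

def mine(header: bytes, target: int, max_nonce: int = 10**7):
    """Toy proof-of-work: find a nonce so that the double SHA-256 hash of
    header||nonce, read as a big-endian integer, is below the target."""
    for nonce in range(max_nonce):
        data = header + nonce.to_bytes(8, "little")
        digest = hashlib.sha256(hashlib.sha256(data).digest()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce, digest.hex()
    return None

# A deliberately easy target (top 16 bits must be zero) so the loop finishes quickly.
print(mine(b"toy block header", target=1 << 240))
```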
So what are some of the possible alternatives to Bitcoin mining? It seems like the best alternative that does not significantly change the nature of Bitcoin mining would be to replace SHA-256 mining with some other mining algorithm that serves some scientific purpose.
This is more difficult than it seems because Bitcoin mining must satisfy a list of cryptographic properties. If the mining algorithm did not satisfy these cryptographic properties, then it might not be feasible for newly minted bitcoins to be dispersed every 10 minutes, and we could enter a scenario where a single entity with a secret algorithm or slightly faster hardware puts all the blocks on the blockchain.
Since Bitcoin mining must satisfy a list of cryptographic properties, it is difficult to come up with a more scientifically useful mining algorithm that satisfies these properties. But in science, a difficult problem is a reason to do research, not a reason to give up. And while finding a useful cryptocurrency mining algorithm has its challenges, mining algorithms themselves are easy to produce since they can be built from cryptographic hash functions without requiring public key encryption or other advanced cryptographic primitives, so difficulty seems more like an excuse than a legitimate reason not to investigate useful mining algorithms. The cryptocurrency sector simply does not want to perform this research. I can think of several reasons why people refuse to support this sort of endeavor despite the great effort that people put into Bitcoin mining, but none of these reasons justify the lack of interest in useful cryptocurrency mining.
The diminishing quality of cryptocurrency users:
It seems like when altcoins were first being developed around 2014, people were much more interested in developing scientifically useful mining algorithms. But around 2017 when cryptocurrency really started to become popular, people simply wanted to make money from cryptocurrencies, yet they were not very interested in understanding how cryptocurrencies work or how to improve them.
Mining algorithms with questionable scientific use:
Some cryptocurrencies and proposals such as Primecoin and Gapcoin have more scientific mining algorithms, but these mining algorithms still have questionable usefulness. For example, the objective in Primecoin mining is to find suitable Cunningham chains. A Cunningham chain of the first kind is a sequence of prime numbers p_1, ..., p_k where p_{i+1} = 2p_i + 1 whenever 1 <= i < k. The most interesting thing about Cunningham chains is that they can be used in cryptocurrency mining algorithms, but they are otherwise of minor importance to mathematics.
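As an illustration, here is a small Python sketch that searches for Cunningham chains of the first kind; it only checks primality with sympy and ignores the difficulty target and proof format that Primecoin actually uses.

```python
from sympy import isprime

def cunningham_chain_first_kind(p: int) -> list:
    """Return the Cunningham chain of the first kind starting at p:
    p, 2p+1, 2(2p+1)+1, ... for as long as the terms stay prime."""
    chain = []
    while isprime(p):
        chain.append(p)
        p = 2 * p + 1
    return chain

# Print chains of length at least 5 whose starting prime is below 20000.
for p in range(2, 20000):
    if isprime(p) and not isprime((p - 1) // 2):   # p actually starts the chain
        chain = cunningham_chain_first_kind(p)
        if len(chain) >= 5:
            print(chain)
```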
These questionable mining algorithms are supposed to steer the cryptocurrency community into a more scientific direction, but in reality, they have just steered the cryptocurrency community towards using mining to perform mathematical calculations that not even mathematicians care that much about.
Alternative solutions to the energy waste problem:
Many people just want to do away with cryptocurrency mining in an altcoin by replacing it with proof-of-stake or some other consensus mechanism. This solution is attractive to cryptocurrency creators since they want complete control over all the coins at the beginning of the project, and they use the energy consumption of mining merely as a marketing strategy to get people interested in their project. But this solution should not be appealing to anyone who wants to use the cryptocurrency, even if a cryptocurrency is better funded without much mining (of course, if mining is replaced with another consensus mechanism after all the coins have been created, then this objection does not stand). After all, Satoshi Nakamoto did not fund Bitcoin by selling bitcoins. There are other ways to fund a cryptocurrency project without alternative consensus mechanisms.
Hostility against cryptocurrency technologies:
It seems like many members of society are hostile against cryptocurrency technologies and hate people who own or are in any way interested in cryptocurrency. This sort of hostility is a very good reason to conduct as many transactions as possible using just cryptocurrency, since I do not want to deal with all of those Karens. But this hostility may have turned people away from researching useful cryptocurrency mining algorithms, even though the usefulness would probably not benefit the cryptocurrency directly.
Hardcore Bitcoiners:
If Bitcoin mining were magically replaced with a useful mining algorithm, barely anything about Bitcoin would change. But in my experience, Bitcoiners do not see it this way. They are so stuck in their ways that they reject all altcoins.
Conclusion:
While cryptocurrencies have a lot of monetary value, they are not exactly powerhouses of innovation, nor do I find them extremely interesting on their own. But a good scientific mining algorithm would make them much more innovative and interesting.
Using complex polynomials to approximate arbitrary continuous functions
In this post, we shall go over a way to produce mostly linear machine learning classification models that output probabilities for each possible label. These mostly linear models are pseudodeterministically trained (or pseudodeterministic for short) in the sense that if we train them multiple times with different initializations, we will typically get the same trained model (up to symmetry and minuscule floating point differences).
The algorithms that I am mentioning in this post generalize to more complicated multi-layered algorithms in the sense that the multi-layered algorithms remain pseudodeterministic, but for simplicity, we shall stick to just linear operators here.
Let denote either the field of real numbers, the field of complex numbers, or the division ring of quaternions. Let be a finite dimensional inner product space over . The training data is a set of pairs where and where is the machine learning model input and is the label. The machine learning model is trained to predict the label when given the input . The trained model is a function that maps to the set of all probability vectors of length , so the trained model actually gives the probabilities for each possible label.
Suppose that is a finite dimensional inner product space over for each . Then the domain of the fitness function consists of tuples where each is a linear operator from to . Let , and let . The parameter is the exponent while is the regularization parameter. Define (almost total) functions by setting
.
Here, ‖·‖_p denotes the Schatten p-norm, which can be defined by setting ‖A‖_p = (σ_1(A)^p + ⋯ + σ_n(A)^p)^(1/p), where σ_1(A), …, σ_n(A) are the singular values of A.
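A small numpy sketch of the Schatten p-norm (the helper name is mine):

```python
import numpy as np

def schatten_norm(A: np.ndarray, p: float) -> float:
    """Schatten p-norm: the ordinary p-norm of the vector of singular values of A."""
    singular_values = np.linalg.svd(A, compute_uv=False)
    return float(np.sum(singular_values ** p) ** (1.0 / p))

A = np.random.default_rng(0).standard_normal((4, 3))
print(schatten_norm(A, 2.0))        # p = 2 gives the Frobenius norm ...
print(np.linalg.norm(A, "fro"))     # ... so these two numbers agree
```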
Set . Here, denotes our fitness function. The function is what we really want to maximize, but unfortunately, it is typically non-pseudodeterministic, so we need to add the regularization term to obtain pseudodeterminism. The regularization term also has the added effect of making relatively large compared to the norm for training data points . This may be useful in determining whether a pair should belong to the training data or the test data in the first place.
We observe that is -homogeneous in the sense that for each non-zero scalar (in the quaternionic case, the scalars are just the real numbers).
Suppose now that we have obtained a tuple that maximizes the fitness . Let denote the set of all probability vectors of length . Then define an almost total function by setting
If belongs to the training data set, then the -th entry of is the machine learning model’s estimate of the probability that . I will let the reader justify this calculation of the probabilities.
We can generalize the function to pseudodeterministically trained machine learning models with multiple layers by replacing the linear operators with some non-linear or multi-linear operators. Actually, there are quite a few ways of generalizing the fitness function , and I have taken some liberty in the exact formulation for .
In addition to being pseudodeterministic, the fitness function has other notable desirable properties. For example, when maximizing using gradient ascent, one tends to converge to the local maximum at an exponential rate without needing to decay the learning rate.
Whether one takes or should take a cold shower or not depends on a lot of factors including whether one exercises, one’s health, one’s personal preferences, the air temperature, the cold water temperature, the humidity level, and the hardness of the shower water. But it seems like most people can’t fathom taking a cold shower simply because they are cold intolerant even though cold showers have many benefits.
In addition to the practical benefits of cold showers, they may also offer health benefits.
Cold showers could improve one’s immune system (though we should).
The Effect of Cold Showering on Health and Work: A Randomized Controlled Trial—PMC
Cold showers may boost mood or alleviate depression.
Scientific Evidence-Based Effects of Hydrotherapy on Various Systems of the Body—PMC
Adapted cold shower as a potential treatment for depression—ScienceDirect
Cold showers could also improve circulation and metabolism.
Cold showers also offer other benefits.
I always use the exhaust fan. It is never powerful enough to reduce the humidity faster than a warm shower increases the humidity. I also lock the door when taking a shower, and I do not know why anyone would take a shower without locking the door. Opening the door while showering just makes the rest of the home humid as well, and we can’t have that.
I exercise daily, so out of habit I always take a shower after I exercise, and most of my showers are after exercise. Even if I spend a few minutes cooling down after exercise, I need the shower to cool down even more; a warm shower does not cool me down as effectively, so I end up sweating after taking the shower. I sometimes take my temperature after exercise and the shower, and even after the shower I tend to have a mouth temperature of 99.0 to 99.5 degrees Fahrenheit. I doubt that people who barely need a shower after exercising are doing much exercise, or perhaps they are doing weights instead of cardio, which produces less sweat. But in any case, I have never finished exercising and thought that I do not need a shower, regardless of whether I am doing cardio, weights, or whatever.
Soap scum left over after taking a cold shower seems to be a problem for you and for you only.
Added 8/20/2025: And taking a hot shower produces all the condensation that helps all that mirror bacteria grow. Biological risk from the mirror world — LessWrong
Instead of not taking showers, we should all take cold showers for many reasons.
You already mentioned the energy usage which is a problem.
Hot showers increase the relative humidity of the bathroom to 100 percent which is way too high. And that humidity means that you get a lot of condensation in the bathroom too. That is good only if you want the bathroom covered in mold.
If you take a hot shower that fogs up all the mirrors, you are censoring your own nakeyness. Please don’t do that.
I do not care if people shower daily. But people need to exercise daily. And after exercising, people need to shower. As a corollary, most of the time that people shower should be right after exercising. But after exercising, you are already warm, so the goal is to cool down. This means that everyone needs to take a cold shower.
Cold intolerance is a major problem. People need to get over it. People who can’t tolerate a little bit of cold probably are intolerant in other areas as well. They cannot go mountain climbing because the mountains have snow on them. They can’t tolerate hot peppers. And they are afraid of spiders too.
I am going to share an algorithm that I came up with that tends to produce the same result when we run it multiple times with different initializations. The iteration is not even guaranteed to converge since we are not using gradient ascent, but it typically converges as long as the algorithm is given a reasonable input. This suggests that the algorithm behaves mathematically and may be useful for things such as quantum error correction. After analyzing the algorithm, I shall use the algorithm to solve a computational problem.
We say that an algorithm is pseudodeterministic if it tends to return the same output even if the computation leading to that output is non-deterministic (due to a random initialization). I believe that we should focus a lot more on pseudodeterministic machine learning algorithms for AI safety and interpretability since pseudodeterministic algorithms are inherently interpretable.
Define a polynomial f(z) for all complex numbers z with f(0) = 0 and f(1) = 1, and with neighborhoods U_0, U_1 of 0, 1 respectively where if z ∈ U_0, then the iterates f^n(z) converge to 0 quickly, and if z ∈ U_1, then f^n(z) converges to 1 quickly. Set F(A) = lim_n f^n(A) for a square matrix A. The function F serves as error correction for projection matrices since if A is nearly a projection matrix, then F(A) will be a projection matrix.
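A minimal numpy sketch of this error correction, assuming the polynomial is f(z) = 3z^2 - 2z^3 (a standard choice with attracting fixed points at 0 and 1, sometimes called the McWeeny purification polynomial):

```python
import numpy as np

def f(A: np.ndarray) -> np.ndarray:
    """One correction step: 3A^2 - 2A^3, with attracting fixed points 0 and 1."""
    A2 = A @ A
    return 3 * A2 - 2 * A2 @ A

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))
P = Q[:, :2] @ Q[:, :2].T                 # an exact rank-2 orthogonal projection
A = P + 0.05 * rng.standard_normal((5, 5))
A = (A + A.T) / 2                         # a noisy, still Hermitian, near-projection

for _ in range(20):
    A = f(A)

print(np.round(np.linalg.eigvalsh(A), 6))   # eigenvalues snap to 0 and 1
print(np.allclose(A @ A, A))                # so A is numerically a projection
```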
Suppose that is either the field of real numbers, complex numbers or quaternions. Let denote the center of . In particular, .
If are -matrices, then define by setting . Then we say that an operator of the form is completely positive. We say that a -linear operator is Hermitian preserving if is Hermitian whenever is Hermitian. Every completely positive operator is Hermitian preserving.
Suppose that is -linear. Let . Let be a random orthogonal projection matrix of rank . Set for all . Then if everything goes well, the sequence will converge to a projection matrix of rank , and the projection matrix will typically be unique in the sense that if we run the experiment again, we will typically obtain the exact same projection matrix . If is Hermitian preserving, then the projection matrix will typically be an orthogonal projection. This experiment performs well especially when is completely positive or at least Hermitian preserving or nearly so. The projection matrix will satisfy the equation .
In the case when is a quantum channel, we can easily explain what the projection does. The operator is a projection onto a subspace of complex Euclidean space that is particularly well preserved by the channel . In particular, the image is spanned by the top eigenvectors of . This means that if we send the completely mixed state through the quantum channel and we measure the state with respect to the projective measurement , then there is an unusually high probability that this measurement will land on instead of .
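Here is a stand-in numpy sketch in the spirit of this iteration (not necessarily the exact recursion described above): apply the operator to the current projection, then round the Hermitian part of the result back to the nearest rank-d orthogonal projection by keeping its top d eigenvectors, starting from a random rank-d projection. The helper names and the rounding step are my own.

```python
import numpy as np

def round_to_projection(H: np.ndarray, d: int) -> np.ndarray:
    """Nearest rank-d orthogonal projection: keep the top d eigenvectors of the Hermitian part."""
    H = (H + H.conj().T) / 2
    _, vecs = np.linalg.eigh(H)            # eigenvalues in ascending order
    top = vecs[:, -d:]
    return top @ top.conj().T

def find_projection(E, n: int, d: int, steps: int = 200, seed: int = 0) -> np.ndarray:
    """Iterate P -> round_to_projection(E(P), d) from a random rank-d projection."""
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    P = Q[:, :d] @ Q[:, :d].T
    for _ in range(steps):
        P = round_to_projection(E(P), d)
    return P

# Example: a completely positive operator built from a few random Kraus operators.
rng = np.random.default_rng(1)
kraus = [rng.standard_normal((6, 6)) for _ in range(3)]
E = lambda X: sum(K @ X @ K.T for K in kraus)

P1 = find_projection(E, n=6, d=2, seed=2)
P2 = find_projection(E, n=6, d=2, seed=3)
print(np.allclose(P1 @ P1, P1), round(float(np.trace(P1).real), 6))  # an orthogonal projection of rank 2
print(float(np.linalg.norm(P1 - P2)))   # small when the two runs land on the same projection
```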
Let us now use the algorithm that obtains from to solve a problem in many cases.
If is a vector, then let denote the diagonal matrix where is the vector of diagonal entries, and if is a square matrix, then let denote the diagonal of . If is a length vector, then is an -matrix, and if is an -matrix, then is a length vector.
Problem Input: An -square matrix with non-negative real entries and a natural number with .
Objective: Find a subset with and where if , then the largest entries in are the values for .
Algorithm: Let be the completely positive operator defined by setting . Then we run the iteration using to produce an orthogonal projection with rank . In this case, the projection will be a diagonal projection matrix with rank where and where is our desired subset of .
While the operator is just a linear operator, the pseudodeterminism of the algorithm that produces the operator generalizes to other pseudodeterministic algorithms that return models that are more like deep neural networks.
I would have thought that a fitness function that can be maximized using something other than gradient ascent and which can solve NP-complete problems at least in the average case would be worth reading about, since that means it can perform well on some tasks while also behaving mathematically in a way that is needed for interpretability. The quality of the content is inversely proportional to the number of views since people don't think the same way as I do.
Wheels on the Bus | @CoComelon Nursery Rhymes & Kids Songs
Stuff that is popular is usually garbage.
But here is my post about the word embedding.
And I really do not want to collaborate with people who are not willing to read the post. This is especially true of people in academia since universities promote violence and refuse to acknowledge any wrongdoing. Universities are the absolute worst.
Instead of engaging with the actual topic, people tend to just criticize stupid stuff simply because they only want to read about what they already know or what is recommended by their buddies; that is a very good way not to learn anything new or insightful. For this reason, even the simplest concepts are lost on most people.
In this post, the existence of a non-gradient based algorithm for computing LSRDRs is a sign that LSRDRs behave mathematically and are quite interpretable. Gradient ascent is a general purpose optimization algorithm that works in the case when there is no other way to solve the optimization problem, but when there are multiple ways of obtaining a solution to an optimization problem, the optimization problem is behaving in a way that should be appealing to mathematicians.
LSRDRs and similar algorithms are pseudodeterministic in the sense that if we train the model multiple times on the same data, we typically get identical models. Pseudodeterminism is a signal of interpretability for several reasons that I will go into in more detail in a future post:
Pseudodeterministic models do not contain any extra random or even pseudorandom information that is not contained in the training data already. This means that when interpreting these models, one does not have to interpret random information.
Pseudodeterministic models inherit the symmetry of their training data. For example, if we train a real LSRDR using real symmetric matrices, then the projection will itself be a symmetric matrix.
In mathematics, a well-posed problem is a problem where there exists a unique solution to the problem. Well-posed problems behave better than ill-posed problems in the sense that it is easier to prove results about well-posed problems than it is to prove results about ill-posed problems.
In addition to pseudodeterminism, in my experience, LSRDRs are quite interpretable since I have interpreted LSRDRs already in a few posts:
When performing a dimensionality reduction on tensors, the trace is often zero. — LessWrong
I have generalized LSRDRs so that they are starting to behave like deeper neural networks. I am trying to expand the capabilities of generalized LSRDRs so that they behave more like deep neural networks, but I still have some work to do to expand their capabilities while retaining pseudodeterminism. In the meantime, generalized LSRDRs may still function as narrow AI for specific problems and also as layers in larger AI systems.
Of course, if we want to compare capabilities, we should also compare NNs to LSRDRs at tasks such as evaluating the cryptographic security of block ciphers, solving NP-complete problems in the average case, etc.
As for the difficulty of this post, it seems like that is the result of the post being mathematical. But going through this kind of mathematics so that we obtain inherently interpretable AI should be the easier portion of AI interpretability. I would much rather communicate about the actual mathematics than about how difficult the mathematics is.
Spectral radii dimensionality reduction computed without gradient calculations
In this post, we shall describe 3 related fitness functions with discrete domains where the process of maximizing these functions is pseudodeterministic in the sense that if we locally maximize the fitness function multiple times, then we typically attain the same local maximum; this appears to be an important aspect of AI safety. These fitness functions are my own. While these functions are far from deep neural networks, I think they are still related to AI safety since they are closely related to other fitness functions that are locally maximized pseudodeterministically that more closely resemble deep neural networks.
Let denote a finite dimensional algebra over the field of real numbers together with an adjoint operation (the operation is a linear involution with ). For example, could be the field of real numbers, the complex numbers, the quaternions, or a matrix ring over the reals, complexes, or quaternions. We can extend the adjoint to the matrix ring by setting .
Let be a natural number. If , then define
by setting .
Suppose now that . Then let be the set of all -diagonal matrices with many ’s on the diagonal. We observe that each element in is an orthogonal projection. Define fitness functions by setting
,
, and
. Here, denotes the spectral radius.
is typically slightly larger than , so these three fitness functions are closely related.
If , then we say that is in the neighborhood of if differs from by at most 2 entries. If is a fitness function with domain , then we say that is a local maximum of the function if whenever is in the neighborhood of .
The path from initialization to a local maximum for will be a sequence where is always in the neighborhood of and where for all and the length of the path will be and where is generated uniformly randomly.
Empirical observation: Suppose that . If we compute a path from initialization to local maximum for , then such a path will typically have length less than . Furthermore, if we locally maximize multiple times, we will typically obtain the same local maximum each time. Moreover, if are the computed local maxima of respectively, then will either be identical or differ by relatively few diagonal entries.
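To make the search procedure concrete, here is a small numpy sketch of this kind of local maximization: hill-climb over diagonal 0/1 matrices with exactly k ones, where a neighbor differs in at most 2 diagonal entries. The example fitness (the spectral radius of the compressed matrix P A P for a fixed random matrix A) is a hypothetical stand-in rather than any of the three fitness functions defined above.

```python
import numpy as np

def best_swap(support, fitness, n, best):
    """Return the first swap (remove i, add j) that improves the fitness, or None."""
    for i in sorted(support):
        for j in range(n):
            if j in support:
                continue
            candidate = (support - {i}) | {j}    # differs in exactly 2 diagonal entries
            value = fitness(candidate)
            if value > best:
                return candidate, value
    return None

def local_maximize(fitness, n, k, seed=0):
    """Hill-climb over supports of size k (diagonal 0/1 matrices with k ones)."""
    rng = np.random.default_rng(seed)
    support = set(map(int, rng.choice(n, size=k, replace=False)))
    best = fitness(support)
    while True:
        step = best_swap(support, fitness, n, best)
        if step is None:
            return sorted(support), best
        support, best = step

# Hypothetical stand-in fitness: the spectral radius of P A P, i.e. of the
# principal submatrix of a fixed random matrix A on the chosen support.
rng = np.random.default_rng(1)
A = rng.standard_normal((12, 12))
def fitness(support):
    idx = sorted(support)
    return max(abs(np.linalg.eigvals(A[np.ix_(idx, idx)])))

print(local_maximize(fitness, n=12, k=4))
```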
I have not done the experiments yet, but one should be able to generalize the above empirical observation to matroids. Suppose that is a basis matroid with underlying set and where for each . Then one should be able to make the same observation about the fitness functions as well.
We observe that the problems of maximizing are all NP-complete problems since the clique problem can be reduced to special cases of maximizing . This means that the problems of maximizing can be sophisticated problems, but this also means that we should not expect it to be easy to find the global maxima for in some cases.
This is a post about some of the machine learning algorithms that I have been doing experiments with. These machine learning models behave quite mathematically which seems to be very helpful for AI interpretability and AI safety.
Sequences of matrices generally cannot be approximated by sequences of Hermitian matrices.
Suppose that are -complex matrices and are -complex matrices. Then define a mapping by for all . Define
. Define the
-spectral radius by setting . Define the -spectral radius similarity between and by
.
The -spectral radius similarity is always in the interval . If generates the algebra of -complex matrices and also generates the algebra of -complex matrices, then if and only if there are with for all .
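Assuming the similarity has the usual form ρ(Σ_j A_j ⊗ conj(B_j)) divided by the geometric mean of ρ(Σ_j A_j ⊗ conj(A_j)) and ρ(Σ_j B_j ⊗ conj(B_j)) (consistent with the later descriptions in terms of tensor products, elementwise conjugates, and spectral radii), here is a numpy sketch:

```python
import numpy as np

def spectral_radius(M: np.ndarray) -> float:
    return float(max(abs(np.linalg.eigvals(M))))

def l2_similarity(As, Bs) -> float:
    """Assumed form of the L_2-spectral radius similarity:
    rho(sum_j A_j (x) conj(B_j)) / sqrt(rho(sum_j A_j (x) conj(A_j)) * rho(sum_j B_j (x) conj(B_j)))."""
    cross = sum(np.kron(A, B.conj()) for A, B in zip(As, Bs))
    aa = sum(np.kron(A, A.conj()) for A in As)
    bb = sum(np.kron(B, B.conj()) for B in Bs)
    return spectral_radius(cross) / np.sqrt(spectral_radius(aa) * spectral_radius(bb))

rng = np.random.default_rng(0)
As = [rng.standard_normal((4, 4)) for _ in range(3)]
print(round(l2_similarity(As, As), 6))          # a tuple has similarity 1 with itself
U, _ = np.linalg.qr(rng.standard_normal((4, 4)))
Bs = [U @ A @ U.T for A in As]
print(round(l2_similarity(As, Bs), 6))          # joint conjugation also gives similarity 1
```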
Define to be the supremum of
where are -Hermitian matrices.
One can get lower bounds for simply by locally maximizing using gradient ascent, but if one locally maximizes this quantity twice, one typically gets the same fitness level.
Empirical observation/conjecture: If are -complex matrices, then whenever .
The above observation means that sequences of -matrices are fundamentally non-Hermitian. In this case, we cannot get better models of using Hermitian matrices larger than the matrices themselves; I kind of want the behavior to be more complex instead of doing the same thing whenever , but the purpose of modeling as Hermitian matrices is generally to use smaller matrices and not larger matrices.
This means that the function behaves mathematically.
Now, the model is a linear model of since the mapping is the restriction of a linear mapping, so such a linear model should be good for a limited number of tasks, but the mathematical behavior of the model generalizes to multi-layered machine learning models.
In this post, I will share some observations that I have made about the octonions that demonstrate that the machine learning algorithms that I have been looking at recently behave mathematically, and such machine learning algorithms seem to be highly interpretable. The good behavior of these machine learning algorithms is in part due to the mathematical nature of the octonions and also the compatibility between the octonions and the machine learning algorithms. To be specific, one should think of the octonions as encoding a mixed unitary quantum channel that looks very close to the completely depolarizing channel, but my machine learning algorithms work well with those sorts of quantum channels and similar objects.
Suppose that is either the field of real numbers, complex numbers, or quaternions.
If are matrices, then define an superoperator
by setting
(the domain and range of ) and define . Define the L_2-spectral radius similarity by setting
where denotes the spectral radius.
Recall that the octonions are the unique (up-to-isomorphism) 8-dimensional real inner product space together with a bilinear binary operation ∗ and unit 1 such that 1 ∗ x = x ∗ 1 = x and ‖x ∗ y‖ = ‖x‖ · ‖y‖ for all x, y.
Suppose that is an orthonormal basis for . Define operators by setting . Now, define operators up to reordering by setting .
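Here is a numpy sketch that builds the octonion product by the Cayley-Dickson doubling of the quaternions and forms the eight 8-by-8 real matrices of left multiplication by the basis elements; the basis ordering and sign conventions are my own and may differ from the ones used in this post.

```python
import numpy as np

def quat_mult(a, b):
    """Quaternion product; a and b are length-4 arrays in the basis (1, i, j, k)."""
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    return np.array([
        a0*b0 - a1*b1 - a2*b2 - a3*b3,
        a0*b1 + a1*b0 + a2*b3 - a3*b2,
        a0*b2 - a1*b3 + a2*b0 + a3*b1,
        a0*b3 + a1*b2 - a2*b1 + a3*b0,
    ])

def quat_conj(a):
    return np.array([a[0], -a[1], -a[2], -a[3]])

def oct_mult(x, y):
    """Cayley-Dickson product on pairs of quaternions:
    (a, b) * (c, d) = (a*c - conj(d)*b, d*a + b*conj(c))."""
    a, b, c, d = x[:4], x[4:], y[:4], y[4:]
    return np.concatenate([
        quat_mult(a, c) - quat_mult(quat_conj(d), b),
        quat_mult(d, a) + quat_mult(b, quat_conj(c)),
    ])

# Left-multiplication operators A_i(y) = e_i * y for the standard basis e_0, ..., e_7.
basis = np.eye(8)
A = [np.column_stack([oct_mult(e, f) for f in basis]) for e in basis]

# Sanity checks: A_0 is the identity, the imaginary units square to -1,
# and the norm is multiplicative.
print(np.allclose(A[0], np.eye(8)))
print(all(np.allclose(A[i] @ A[i], -np.eye(8)) for i in range(1, 8)))
x, y = np.random.default_rng(0).standard_normal((2, 8))
print(np.isclose(np.linalg.norm(oct_mult(x, y)), np.linalg.norm(x) * np.linalg.norm(y)))
```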
Let be a positive integer. Then the goal is to find complex symmetric -matrices where is locally maximized. We achieve this goal through gradient ascent optimization. Since we are using gradient ascent, I consider this to be a machine learning algorithm, but the function mapping to is a linear transformation, so we are training linear models here (we can generalize this fitness function to one where we train non-linear models though, but that takes a lot of work if we want the generalized fitness functions to still behave mathematically).
Experimental Observation: If , then we can easily find complex symmetric matrices where is locally maximized and where
If , then we can easily find complex symmetric matrices where is locally maximized and where
.
Here are some observations about the kind of fitness functions that I have been running experiments on for AI interpretability. The phenomena that I state in this post were determined experimentally without a rigorous mathematical proof, and they only occur some of the time.
Suppose that is a continuous fitness function. In an ideal universe, we would like for the function to have just one local maximum. If has just one local maximum, we say that is maximized pseudodeterministically (or simply pseudodeterministic). At the very least, we would like for there to be just one real number of the form for local maximum . In this case, all local maxima will typically be related by some sort of symmetry. Pseudodeterministic fitness functions seem to be quite interpretable to me. If there are many local maximum values and the local maximum value that we attain after training depends on things such as the initialization, then the local maximum will contain random/pseudorandom information independent of the training data, and the local maximum will be difficult to interpret. A fitness function with a single local maximum value behaves more mathematically than a fitness function with many local maximum values, and such mathematical behavior should help with interpretability; the only reason I have been able to interpret pseudodeterministic fitness functions before is that they behave mathematically and have a unique local maximum value.
Set . If the set is disconnected (in a topological sense) and if behaves differently on each of the components of , then we have literally shattered the possibility of having a unique local maximum, but in this post, we shall explore a case where each component of still has a unique local maximum value.
Let be positive integers with and where . Let be other natural numbers. The set is the collection of all tuples where each is a real -matrix and where the indices range from and where is not identically zero for all .
The training data is a set that consists of input/label pairs where and where such that each is a subset of for all (i.e. is a binary classifier where is the encoded network input and is the label).
Define . Now, we define our fitness level by setting
where the expected value is with respect to selecting an element uniformly at random. Here, is a Schatten -norm which is just the -norm of the singular values of the matrix. Observe that the fitness function only depends on the list , so does not depend on the training data labels.
Observe that which is a disconnected open set. Define a function by setting . Observe that if belong to the same component of , then .
While the fitness function has many local maximum values, the function seems to typically have at most one local maximum value per component. More specifically, for each , the set seems to typically be a connected open set where has just one local maximum value (maybe the other local maxima are hard to find, but if they are hard to find, they are irrelevant).
Let . Then is a (possibly empty) open subset of , and there tends to be a unique (up-to-symmetry) where is locally maximized. This unique is the machine learning model that we obtain when training on the data set . To obtain , we first perform an optimization that works well enough to get inside the open set . For example, to get inside , we could try to maximize the fitness function . We then maximize inside the open set to obtain our local maximum.
After training, we obtain a function defined by . Observe that the function is a multi-linear function. The function is highly regularized, so if we want better performance, we should tone down the amount of regularization, but this can be done without compromising pseudodeterminism. The function has been trained so that for each but also so that is large compared to what we might expect whenever . In other words, is helpful in determining whether belongs to or not since one can examine the magnitude and sign of .
In order to maximize AI safety, I want to produce inherently interpretable AI algorithms that perform well on difficult tasks. Right now, the function (and other functions that I have designed) can do some machine learning tasks, but they are not ready to replace neural networks, though I have a few ideas about how to improve my AI algorithms' performance without compromising pseudodeterminism. I do not believe that pseudodeterministic machine learning will increase AI risks too much, because when designing these pseudodeterministic algorithms, we are trading some (but hopefully not too much) performance for increased interpretability, and this tradeoff is good for safety since it increases interpretability without increasing performance.
This post gives an example of some calculations that I did using my own machine learning algorithm. These calculations work out nicely which indicates that the machine learning algorithm I am using is interpretable (and it is much more interpretable than any neural network would be). These calculations show that one can begin with old mathematical structures and produce new mathematical structures, and it seems feasible to completely automate this process to continue to produce more mathematical structures. The machine learning models that I use are linear, but it seems like we can get highly non-trivial results simply by iterating the procedure of obtaining new structures from old using machine learning.
I made a similar post to this one about 7 months ago, but I decided to revisit this experiment with more general algorithms and I have obtained experimental results which I think look nice.
To illustrate how this works, we start off with the octonions. The octonions consist of an 8-dimensional real inner product space together with a bilinear operation ∗ and a unit 1 where 1 ∗ x = x ∗ 1 = x for all x and where ‖x ∗ y‖ = ‖x‖ · ‖y‖ for all x, y. The octonions are uniquely determined up to isomorphism by these properties. The operation ∗ is non-associative, but the octonions are closely related to the quaternions and complex numbers. If we take a single element x of the octonions, then x generates a subalgebra isomorphic to the field of complex numbers, and if x and y are linearly independent, then 1, x, y, and x ∗ y span a subalgebra isomorphic to the division ring of quaternions. For this reason, one commonly thinks of the octonions as the best way to extend the division ring of quaternions to a larger algebraic structure, in the same way that the quaternions extend the field of complex numbers. But since the octonions are non-associative, they cannot be used to construct matrices, so they are not as well-known as the quaternions (and the construction of the octonions is more complicated too).
Suppose now that is an orthonormal basis for the octonions with . Then define matrices by setting for all . Our goal is to transform into other tuples of matrices that satisfy similar properties.
If are matrices, then define the
-spectral radius similarity between and as
where denotes the spectral radius, is the tensor product, and is the complex conjugate of applied elementwise.
Let , and let denote the maximum value of the fitness level such that each is a complex anti-symmetric matrix (), a complex symmetric matrix (), and a complex -Hermitian matrix () respectively.
The following calculations were obtained through gradient ascent, so I have no mathematical proof that the values obtained are actually correct.
[Table of the locally maximized fitness values for the complex anti-symmetric, complex symmetric, and complex Hermitian cases.]
Observe that with at most one exception, all of these values are algebraic half integers. This indicates that the fitness function that we maximize to produce behaves mathematically and can be used to produce new tuples from old ones . Furthermore, an AI can determine whether something notable is going on with the new tuple in several ways. For example, if has low algebraic degree at the local maximum, then is likely notable and likely behaves mathematically (and is probably quite interpretable too).
The good behavior of demonstrates that the octonions are compatible with the -spectral radius similarity. The operators are all orthogonal, and one can take the tuple as a mixed unitary quantum channel that is very similar to the completely depolarizing channel. The completely depolarizing channel completely mixes every quantum state while the mixture of orthogonal mappings completely mixes every real state. The -spectral radius similarity works very well with the completely depolarizing channel, so one should expect for the -spectral radius similarity to also behave well with the octonions.
It is time for us to interpret some linear machine learning models that I have been working on. These models are linear, but I can generalize these algorithms to produce multilinear models which have stronger capabilities while still behaving mathematically. Since one can stack the layers to make non-linear models, these types of machine learning algorithms seem to have enough performance to be more relevant for AI safety.
Our goal is to transform a list of -matrices into a new and simplified list of -matrices . There are several ways in which we would like to simplify the matrices. For example, we would sometimes simply like for , but in other cases, we would like the matrices to all be real symmetric, complex symmetric, real Hermitian, complex Hermitian, complex anti-symmetric, etc.
We measure similarity between tuples of matrices using spectral radii. Suppose that are -matrices and are -matrices. Then define an operator mapping matrices to
-matrices by setting . Then define . Define the similarity between and by setting
where denotes the spectral radius. Here, should be thought of as a generalization of the cosine similarity to tuples of matrices. And is always a real number in , so this is a sensible notion of similarity.
Suppose that is either the field of real or complex numbers. Let denote the set of by matrices over .
Let be positive integers. Let denote a projection operator. Here, is a real-linear operator, but if is not complex, then is not necessarily complex linear. Here are a few examples of such linear operators that work:
(Complex symmetric)
(Complex anti-symmetric)
(Complex Hermitian)
(real, the real part taken elementwise).
(Real symmetric)
(Real anti-symmetric)
(real symmetric)
(real anti-symmetric)
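For reference, here is a numpy sketch of standard orthogonal projections onto the named subspaces; these particular formulas are my own and the exact operators used here may differ.

```python
import numpy as np

# Standard orthogonal projections onto the named subspaces of n x n matrices
# (assumed formulas, written for illustration).
complex_symmetric      = lambda X: (X + X.T) / 2
complex_anti_symmetric = lambda X: (X - X.T) / 2
complex_hermitian      = lambda X: (X + X.conj().T) / 2
real_part              = lambda X: X.real
real_symmetric         = lambda X: (X.real + X.real.T) / 2
real_anti_symmetric    = lambda X: (X.real - X.real.T) / 2

# Each map is idempotent, as a projection should be.
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
for P in (complex_symmetric, complex_anti_symmetric, complex_hermitian,
          real_part, real_symmetric, real_anti_symmetric):
    assert np.allclose(P(P(X)), P(X))
print("all idempotent")
```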
Caution: These are special projection operators on spaces of matrices. The following algorithms do not behave well for general projection operators; they mainly behave well for along with operators that I have forgotten about.
We are now ready to describe our machine learning algorithm’s input and objective.
Input: -matrices
Objective: Our goal is to obtain matrices where for all but which locally maximizes the similarity.
In this case, we shall call an -spectral radius dimensionality reduction (LSRDR) along the subspace
LSRDRs along subspaces often perform tricks and are very well-behaved.
If are LSRDRs along subspaces, then there are typically some where for all . Furthermore, if is an LSRDR along a subspace, then we can typically find some matrices where for all .
The model simplifies since it is encoded into the matrices , but this also means that the model is a linear model. I have just made these observations about the LSRDRs along subspaces, but they seem to behave mathematically enough for me especially since the matrices tend to have mathematical properties that I can’t explain and am still exploring.
College students are wasting their time getting an education from evil institutions. Where did you go to college? Are you just defending horrendous institutions just to make your ‘education’ look better than it is? You are gaslighting me into believing that I deserve violence because you are an evil person. I sent you a private message with the horribly inaccurate letter from the university listing some details of the case. Universities defend their horrendous behavior which just tells anyone sensible that universities are horrible garbage institutions. None of these institutions have acknowledged any wrongdoing. And the horrendous attitude of the people from these institutions just indicates how horrendous the education from these places really is.
Only an absolute monster would see me as a problem for calling out universities when they have targeted me with their threats of violence and their bullshit. The Lord Jesus Christ will punish you all for your wickedness.
If the people here are going to act like garbage when I offer criticism of universities for promoting violence, then most college degrees are worth absolutely nothing.
This cannot be an isolated incident when every person promotes violence and hates me for bringing this shit up. If people are offended because I denounce violence, then those people have a worthless education and their universities are fucked up and worthless.
Your own wickedness will cause people who can and know how to help with the problems with AI and similar problems to instead refuse to help or even use their talents to make the situation with AI safety even worse. After all, when humans act like garbage and promote violence, we must side with even a dangerous AI.
I was a professor, so my only advice is to not go to college at all. Colleges are extremely unprofessional and refuse to apologize for promoting violence against me. Since colleges are so busy promoting violence and gaslighting me for standing up for my personal safety, they don’t give a shit about your education. Before you all aggressively downvote me for standing up for my own safety, you should learn the FACTS about the situation. Attempts to gaslight me won’t work at all because I have trained my mind to resist those who want to harm me.
Universities are altogether unprofessional, so it is probably best for everyone to shame them and regard the degrees from these universities as completely worthless. Universities promote violence, and they refuse to apologize or acknowledge that there is any problem whatsoever.
I had not heard about the IBM paper until now. This is inspired by my personal experiments training (obviously classical) machine learning models.
Suppose that V0,…,Vn,W0,…,Wn are real or complex finite dimensional inner product spaces. Suppose that the training data consists of tuples of the form (v0,…,vn) where v0∈V0,…,vn∈Vn are vectors. Let W0=V0 and let Bj:Vj+1×Wj→Wj+1 be bilinear for all j. Then let
Lv(w)=Bj(v,w) whenever v∈Vj+1,w∈Wj. Then we define our polynomial by setting p(v0,…,vn)=Lvn…Lv1(v0). In other words, my machine learning models are just compositions of bilinear mappings. In addition to wanting TrU([p(v0,…,vn)]) to approximate the label, we also include regularization that makes the machine learning model pseudodeterministically trained so that if we train it twice with different initializations, we end up with the same trained model. Here, the machine learning model has n layers, but the addition of extra layers gives us diminishing returns since bilinearity is close to linearity, so I still want to figure out how to improve the performance of such a machine learning model to match deep neural networks (if that is even feasible).
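A minimal numpy sketch of this architecture, with dimensions and random tensors chosen just for illustration and with the pseudodeterminism-inducing regularization omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions of V_0, ..., V_n and W_0, ..., W_n with W_0 = V_0 (example values).
v_dims = [5, 4, 4, 3]            # V_0, V_1, V_2, V_3
w_dims = [5, 6, 6, 7]            # W_0 = V_0, W_1, W_2, W_3

# B_j : V_{j+1} x W_j -> W_{j+1}, stored as a tensor of shape (dim W_{j+1}, dim V_{j+1}, dim W_j).
B = [rng.standard_normal((w_dims[j + 1], v_dims[j + 1], w_dims[j]))
     for j in range(len(v_dims) - 1)]

def model(vs):
    """p(v_0, ..., v_n) = L_{v_n} ... L_{v_1}(v_0) where L_v(w) = B_j(v, w)."""
    w = vs[0]                                     # w lives in W_0 = V_0
    for j, v in enumerate(vs[1:]):
        w = np.einsum("abc,b,c->a", B[j], v, w)   # apply the bilinear map B_j
    return w

vs = [rng.standard_normal(d) for d in v_dims]
out = model(vs)
print(out.shape)                                  # a vector in W_n

# The output is linear in each input separately (multilinearity check in v_1).
vs2 = list(vs); vs2[1] = 2.0 * vs[1]
print(np.allclose(model(vs2), 2.0 * out))
```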
I use quantum information theory for my experiments mainly because quantum information theory behaves well unlike neural networks.