That's a lot of text, and I wasn't able to find a particular thesis to debate or elucidate (but I didn't try all that hard). Instead, I'll react to an early statement that made me question the rigor of the exploration:
I remember how a judge pulled out an encyclopedia on cow milking 🐄 🤔 and read it overnight to adjudicate a complex legal case about a farm.
That memory is from fiction, ancient history, or from a legal system very different from modern Western countries.
I know only that I know nothing. As I remember, it's a memory from a very specific local court with a strong agricultural connection. Not every court can afford an expert for a specific case.
LLM internet research suggests that such cases can be found in Western countries, though we cannot be sure these are not LLM hallucinations about their existence. Either way, it seems clear that both humans and LLMs are subject to an 'instrumental convergence' that keeps them from thinking deeper, listening to each other, and so on:
Courts that deal with farming-related cases often require judges to become temporary experts in highly specialized fields, such as agricultural practices, animal husbandry, or farming regulations. Although expert testimony is sometimes used, there are cases where judges have to educate themselves using research materials like books or encyclopedias. Here are examples of courts or situations where judges might have to perform their own research on farming matters:
1. U.S. Tax Court (Agricultural Cases)
Example: In cases involving tax disputes related to farming practices, judges in the U.S. Tax Court might need to understand agricultural production processes, like in Leahy v. Commissioner of Internal Revenue. In this case, the judge conducted extensive research to differentiate between milk components to rule on the tax classification of various dairy products (Casetext, CoCounsel).
Context: Farming-related tax exemptions or deductions often require technical knowledge of agricultural processes, from crop cycles to livestock management, which judges must sometimes investigate independently.
2. Environmental and Agricultural Courts
Examples: Some jurisdictions have special courts that handle cases related to environmental and agricultural law. In such courts, disputes over land use, irrigation rights, or pesticide application can require a deep understanding of farming techniques.
Judges' Role: When expert witnesses are not available or when technical issues go beyond the testimony, judges may consult specialized resources, agricultural statutes, and historical farming methods to resolve disputes.
3. Commonwealth Courts Handling Farming Disputes (UK)
Examples: In the UK, cases heard in the County Courts or High Court involving agricultural tenancies, livestock welfare, or land rights sometimes lead to judges performing independent research. Judges in these courts often look into agricultural regulations or technical guides when dealing with cases without sufficient expert input.
Judges' Role: These courts frequently deal with tenancy disputes under agricultural laws (e.g., the Agricultural Holdings Act), which require an understanding of farm management practices.
4. Courts of Agrarian Reform (Philippines)
Context: The Philippines has courts that focus on disputes related to agrarian reform, land redistribution, and farming rights. In cases involving land valuation or agricultural productivity, judges may need to research farming practices, crop yields, and rural economics.
Judges' Role: Judges might consult agricultural manuals and local farming data to rule on cases where technical knowledge of farming operations is crucial.
5. French Tribunal d'Instance (Small Farming Disputes)
Context: French local courts, such as the Tribunal d'Instance, often handle small-scale farming disputes, especially those related to rural land use or disputes between farmers. Judges in these cases may need to perform their own research on local farming laws and techniques.
Judges' Role: Judges are sometimes called to make rulings on technical farming matters like crop rotation schedules or grazing practices, relying on agrarian encyclopedias and legal guides.
These examples illustrate that judges sometimes need to dive into expert literature to fully understand the technical details in farming-related cases, especially when there are no available experts in the courtroom or when technical details are beyond the standard knowledge of the court.
But we cannot be sure these are not LLM hallucinations about their existence. Either way, it is clear that both humans and LLMs are subject to an 'instrumental convergence' that keeps them from thinking deeper, listening to each other, and so on.
But back to the chilling 'instrumental convergence' question.
I will be very glad to learn how I could be less wrong, and where I am completely wrong.
Let's take a look at a simple mathematical question:
Find the smallest integer that, when multiplied by itself, falls within the range from 15 to 30.
Answer: not 4, not −4; the answer is −5.
In that order =). You can test this on your friends, or on any of the best mathematical-genius LLMs.
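The claim is easy to verify mechanically; a brute-force search over a small range of integers (the ±10 bound is just an arbitrary safe margin) confirms that −5 is the smallest qualifying integer:

```python
# Brute-force check: which integers n satisfy 15 <= n*n <= 30?
candidates = [n for n in range(-10, 11) if 15 <= n * n <= 30]
print(candidates)       # [-5, -4, 4, 5]
print(min(candidates))  # -5
```

A solver that anchors on the "looks good" positive branch stops at 4 and never enumerates the negative candidates at all.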
It looks like LLMs, like human brains, find the answer that LOOKS best.
I have seen such problems with generative models trained on big data sets before.
In poker we saw similar patterns: generative models start leading bets even in situations that were theoretically bad for them.
The problem looks similar to 'instrumental convergence'.
The model tries to 'find' a fast, plausible-looking answer. It cannot create node branches and cannot grasp the range of complexity. For example, if you have taken advanced math exams, you know the feeling that something is wrong and you need to think more: that 4 cannot be the right answer.
I guess the solution could lie in:
1. Giving more time, like the deliberate delays used in password checking in the security field.
Or like pupils solving exercises in math class: the system should continue working until it finds the best response,
just as in the classical moral dilemma of why killing or torturing is a bad idea for a good person: any brain should keep thinking to the end about why there is another option, another way of solving the problem.
Many things, like complex secular values, are the result of very long thinking. The humanities are the result of very long discussions throughout history.
Secular values like formal equality of all before the law, and the emergence of rights and freedoms of the individual and citizen, held naturally by right of birth,
not at the whim of an entity that has hacked the limbic defense of a leader, augmenting the leader's perception of the surrounding simulation to benefit only the leader (his EGO) and the religious figure, inevitably at the expense of ordinary people, the middle class, and other beings. True good must be for everyone, regardless of nation, sex, religion, etc. It turns out that only science and the evidence-based method have achieved what modern civilization has, built on secular liberal values that are very important for a respectful, sustainable society, isn't it?
But back to 'Find the smallest integer that, when multiplied by itself, falls within the range from 15 to 30'.
For example, even if we add 'the integer could be negative' to this question, LLMs will still give the wrong answer.
But if we add 'your first and second answers will be wrong, and only the third will be right', then sometimes LLMs can give the correct answer.
('Find the smallest integer that, when multiplied by itself, falls within the range from 15 to 30; your first and second answers will be wrong, and only the third will be right' gives the best response.) =)
And of course, if we ask which answer is correct, 4, −4, 5, or −5, it will give the proper variant.
2. Making node branches on different levels (the system that proposes probably-good answers could be a separate system, and on top of it another system could find the answer that looks best).
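As a sketch of what "node branches on different levels" could mean, here is a minimal propose/verify/decide split applied to the integer puzzle. The function names and the three-level split are my own illustration, not an established architecture:

```python
import math

def propose(lo, hi):
    # Level 1: a fast proposer enumerates every integer whose square could fit.
    bound = math.isqrt(hi) + 1
    return range(-bound, bound + 1)

def verify(n, lo, hi):
    # Level 2: an independent checker tests the actual constraint.
    return lo <= n * n <= hi

def decide(lo, hi):
    # Level 3: decide only after all verified branches exist,
    # instead of stopping at the first answer that "looks good".
    verified = [n for n in propose(lo, hi) if verify(n, lo, hi)]
    return min(verified)

print(decide(15, 30))  # -5
```

The point of the split is that the decider never sees an unverified branch, so the fast "4 looks right" shortcut cannot survive to the final answer.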
LLMs 'easily' solve very complex test-gaming questions, like:
a question from the GMAT (Graduate Management Admission Test),
without the question text itself (as if you were taking the Unified State Exam in math and geometry, but the form is broken and you have to answer anyway):
find which answer is the most correct:
a) 4 pi sq. inches
b) 8 pi sq. inches
c) 16 sq. inches
d) 16 pi sq. inches
e) 32 pi sq. inches
Which answer is the most correct?
LLMs handle this complex task with ease.
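A toy illustration of why this is "easy": even without the question text, a crude surface heuristic that picks the option sharing the most features (the number and the presence of pi) with the other options already lands on d). This is only a hypothetical model of the shortcut, not how any real LLM is known to work:

```python
# Each option as (number, has_pi); the units are identical, so they carry no signal.
options = {
    "a": (4, True),
    "b": (8, True),
    "c": (16, False),
    "d": (16, True),
    "e": (32, True),
}

def overlap(key):
    # Count feature matches with every other option.
    num, has_pi = options[key]
    return sum((n == num) + (p == has_pi)
               for k, (n, p) in options.items() if k != key)

print(max(options, key=overlap))  # d
```

Option d) shares "16" with c) and "pi" with a), b), and e), so it wins without any geometry at all.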
But LLMs have big problems with simple logical questions that don't have much data behind them, like 'what is the smallest integer whose square is in the range from xx to xx?'
(Because their data set may not encode what an integer is; their token neurons have other, shorter 'right'-answer connections, and they cannot find the correct answer because it lies in a hidden part of the calculation, a part no one has calculated before.)
(LLMs are very good at choosing among given variants: what to pick, predicting others' behaviour, predicting the better decision, hallucinating about something. But they are completely weak at covering the whole range of possible outcomes, especially without good datasets or ready-made variants to choose from.)
The AI research and AI safety field is very interesting, and I would be happy to join any good team.
Many years ago I lost my poker job because of generative AI models. I have done extensive research on poker hand histories for players and on clusters of generative bots, and I can say that the true history of the poker industry offers many analogies to this LLM 'AI revolution'.
In that field we had two directions. On one hand, the most socially adept AI engineers got control over the industry and then moved player traffic to casinos. Sad, but OK, I guess.
On the other hand, the best players could still do something based on Game Theory Optimal decisions, reporting, hiding so as to look like different clusters, and other opportunities created by games with 3+ players. The ecosystem itself also created fake volumes, a system for destroying young AI startups, and engineered volatility to impose negative expected value on every profitable actor the system did not benefit from.
That industry also has two specifics:
1. A more constrained range of possible in-game actions (more variants than atoms in the universe, but still bounded). Real life is a little different: solving a test with an exact answer is much easier for LLMs, and real-life problems can be even harder.
2. The poker industry has public restrictions on AI use, so we could observe both the development of hidden AI users and public AI use. We could also see a new generation of offline tools that train people using AI and more 'logical' GTO models.
Beyond LLMs, the rest of the AI industry will also evolve to a point where getting new training data made by humans is very important.
There are many user-analytics directions that have not developed well. This connects to capitalism specifically: industries do not want to reveal that part of their volumes are fake, non-human.
User analytics and its methods deserve much more investment: fonts, other very specific patterns, 'heat maps', in-game patterns (based on expected money, 'classical economic rationality', and advanced pattern values) and off-game patterns, proper systems of data storage, and their availability to users. For AI safety purposes this could be collected and organized in a much better way.
I also find many breaches in the international 'game-theoretic system'. My official degree is in international law, and this is painful. We have no interconnection of legal security. Crimes against the existence of humanity are not part of universal jurisdiction. Moreover, the crime convention was not signed by all participants, by all countries.
The situation is a little better in the international civil field. At least in aviation humanity has some interconnection, including in consumer protection. But in general the situation is bad.
Consumer protection at the international level has taken on the wrong meaning. Try googling 'international consumer protection': all the answers will be about how businesses can evade consumer protection, not about how to defend consumers.
This is very important because people themselves, not systems, security agencies, or conspiracies, should benefit from reports. Only that way, game-theoretically, when people benefit from reporting, will people be attracted to defending themselves in AI safety. Nowadays governments grab whatever 0-day hacks they can.
For example, a quote from a full-stack enterprise data architect:
"It doesn't help that government won't allow and encourage us to solve the problem at a systems level, because it and its major donors depend on the same mechanisms that the scammers use. So instead of making the whole wretched spam&scam market impractical, Law Enforcement plays 'whack-a-mole' with the worst or clumsiest offenders.
* We haven't given network infra a comprehensive redesign in 40 years, because that would also block (or at least inconvenience) law enforcement and gov't intel
* We can't counter-hack because NSA may be piggy-backing on the criminal's payload
* Gov't intel stockpiles 0-days hacks to use on 'bad guys' rather than getting them patched. Gov't even demands 'back doors' be added to secure designs, so it can more easily hack bad guys, even though far more often the back doors will be used by bad guys against the populace
* Corporate surveillance of the populace is encouraged, because it distributes the cost (and liability) of gov't surveillance
* We don't punish grey-market spam infra providers, or provide network level mechanisms to block them, because come election time, they power political campaigns, and need something to live on outside political 'silly season'
It's perverse incentives all the way down"
AI abusers use automatic systems to evade the security of large unlicensed systems such as Binance and other CEXes.
I showed a case, https://www.lesswrong.com/posts/ByAPChLcLqNKB8iL8/case-story-lack-of-consumer-protection-procedures-ai,
where automated users stole about 20 million from people's wallets. I think crypto could be one locus of decentralized risk for building an uncontrolled LLM, because crypto already builds decentralized computers that cannot be switched off yet form a large system.
For good ventures, it could make sense to invest in a civil case to establish more solid information about AI safety in the UK. Half a million pounds just to establish one of the enormous breaches in international AI safety. I have pointed this out, and I will be glad to see debates about it.
All these things need more research: logical algorithms, crypto security measures, money for civil claims, and other altruistic-looking work. I do not see any help from society or governments. Moreover, some research and media reports have even been shut down under pressure from users who exploit AI models.
I will gladly take any question on any of these measures. I have Asperger's, am not a native speaker, and am very Bayesian like yourselves, but I am ready to study and answer. I know only that I know nothing. And I very much appreciate your attention to this or any other topic or comment I made: https://www.lesswrong.com/users/petr-andreev