The reasons you give btw don’t give me much consolation. The code-leaking thing is very temporary; if you could host cutting-edge models on AWS or Azure, it wouldn’t be an issue for most companies. If you could self-host them, it wouldn’t be an issue for almost /any/ company. The errors thing is the real crux. The basic solution there, I think, is scaling: multishot the problem, rank the solutions, test in every way imaginable, and then, for each solved problem, optimize your prompts until they can one-shot it, keeping a backlog of examples to run workflow regression tests against.
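A minimal sketch of that multishot-and-rank loop. Everything here is hypothetical scaffolding: `call_llm` and `passes_tests` stand in for a real model API and a real test harness, and the scoring is faked with a seeded RNG so the sketch runs offline.

```python
import random

def call_llm(prompt, seed):
    # Hypothetical stand-in for a real model call; returns one candidate solution.
    # The seed mimics sampling-temperature variation between shots.
    random.seed(seed)
    return {"code": f"solution-{seed}", "score": random.random()}

def passes_tests(candidate):
    # Stand-in for "test in every way imaginable": unit tests, linters, etc.
    return candidate["score"] > 0.3

def multishot(prompt, n=5):
    """Sample n candidates, filter by tests, rank the survivors best-first."""
    candidates = [call_llm(prompt, seed) for seed in range(n)]
    passing = [c for c in candidates if passes_tests(c)]
    return sorted(passing, key=lambda c: c["score"], reverse=True)

# Backlog of solved (prompt, solution) pairs for workflow regression testing.
regression_backlog = []

ranked = multishot("implement quicksort")
best = ranked[0]
regression_backlog.append(("implement quicksort", best))
print(best["code"])
```

The backlog is the part that compounds: every time you tune a prompt, replay the backlog to make sure previously one-shot problems still one-shot.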
The style thing is very tractable; AIs love following style instructions.
The big moment for me was realizing that while each AI’s context window is limited, within that window you can ask LOTS of different questions and expect pretty good answers. So you ask questions that compress the information in the window for the purposes of your problem (LLMs are pretty darn good at summarizing), and keep doing that until you have enough context to solve the problem.
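That compress-until-it-fits loop can be sketched like so. The `summarize` function is a hypothetical model call ("compress this text for the purpose of X"), stubbed here as truncation so the sketch runs without an API; the budget stands in for the context window.

```python
def summarize(text, goal):
    # Hypothetical LLM call: "compress this text for the purpose of <goal>".
    # Stubbed as truncation so the sketch is runnable offline.
    return text[:100]

def compress_for_problem(documents, goal, budget=400):
    """Summarize each source, then keep re-summarizing the combined
    context until it fits within the context budget."""
    summaries = [summarize(d, goal) for d in documents]
    combined = "\n".join(summaries)
    while len(combined) > budget:
        combined = summarize(combined, goal)
    return combined

docs = ["x" * 500 for _ in range(5)]
context = compress_for_problem(docs, "fix the auth bug")
print(len(context))
```

In practice each `summarize` call would be goal-directed (keep only what matters for this problem), which is what makes the repeated compression lossy in a useful way rather than just shorter.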