Technology professional with over 30 years of experience, currently transforming their focus fully into the world of AI. What started as a deep dive into Copilot Agent mode in VS Code to accelerate a new project grew into an insatiable hunger for knowledge in this area, and finally into research.
Richard Amerman
Hello,
I’m very happy to be here!
Unfortunately I’m only just bringing LessWrong into my life, and I do consider that a missed opportunity. I wish I had found this site many years ago, though that could have been dangerous: this could be a rabbit hole I might have found challenging to escape. But how bad would that actually have been? I’m sure my wife would not have been thrilled. My reason for coming here now, especially at this point in time, is very unoriginal. In the last eight months I’ve taken what was a technology career possibly in its waning years into a new world of wonder and exploration, and yes, I’m talking about AI. I’ve been in technology for over 30 years and have certainly paid some attention to machine learning and AI over that span, but somehow I just missed what was really going on in the last two years. I think I was overwhelmed by the level of hype I kept running into and by how shallow it often seemed, all talk of magical prompts that would give you miraculous results, and I assumed things weren’t really in a very good place. I was very wrong, and I’m glad I didn’t wait even longer to discover the true state of things, though not all of it is good.
For the past six months I’ve been using AI all day long at my day job, working with Claude Code and many other tools on development and platform engineering. It’s really only in the last couple of months that I’ve started to look more seriously at what I found compelling in the world of AI, and I kept coming back to one of my earliest observations, formed during my re-engagement with AI this year. It was an instinct that hit me right away after discovering what the new world of LLMs had to offer: they seemed, very clearly to me, fundamentally flawed. This wasn’t based on any deep understanding of the training process or of how LLMs work, though it was reinforced as my understanding of the subject expanded. It first took shape as I ran extensive experiments using AI to do my work. I’ll cut to the chase and just state that it seemed clear to me that LLMs were highly unlikely to lead to AGI, or at least AGI as I view it.
Learning and knowledge have always been dear and important topics for me. I have never stopped picking apart my understanding and model of how learning works, at least for myself, and what makes the process more constructive, healthy, and valid. In reading some of the Sequences, though I have barely scratched the surface, it is clear this is a community I’m excited to have discovered and one I look forward to participating in. While I can easily accept and be content with a new AI career that mostly involves development and engineering in the world of LLMs, my real interest lies in trying to imagine and explore the space of what, in my mind, would have an actual chance at achieving AGI. I’m not interested in just building toward a challenge. (This point is relevant because I initially thought that building something to match against ARC-AGI would be a great way to learn and explore.) I’m more interested in working out an idea of how an AI model could not only do real learning, reaching actual comprehension, but build its own world model, one distilled nugget of understanding at a time.
One goal of this work was to formulate this vision mostly in isolation, as a way to really stretch my mind and see where I could go on my own. But I digress; this is the direction that led me here. I was talking to a few people at a local AGI event, and they recommended LessWrong as the ideal place for my first article on this vision.
While I’m still days from having that article ready, I had an experience this morning that inspired me to write a quick article that seemed like a good first post for this site. I made sure I digested the guidelines, especially the one on LLM-generated content. I do most of my writing that involves bringing lots of pieces together with the aid of AI, mostly to help organize, make larger edits, and analyze my own writing. That was the case with the piece I wrote today and posted here. It was rejected, and while I have nothing at all critical to say about the reviewer, especially considering the workload that must exist these days, the main stated reason was the LLM policy. Put simply, this work was my content and my words. I just copied everything in this comment, other than this last bit, into JustDone, and it declared the text 99% AI content; yet I wrote every word of this in real time in the comment box of this page. While I can make no claim to understand the process the moderator used to make their determination, I hope to get this figured out before I am ready to post my piece on distillation of knowledge into a world model. I fear that an old and wordy writer like myself often sounds more like an AI than a modern human. :-)
Sorry for the overly wordy first post, but I look forward to interacting and collaborating in the future!