I recently posted a video showing how to use ChatGPT to generate code for Arduino projects. It was a fun little experiment, and it worked really well. Worth a watch if you want to see some of ChatGPT’s coding capabilities.
I also use Copilot in my daily work and find it genuinely useful. It helps when you jump between many different languages and libraries: you know roughly how to do something, but you often can’t remember the particular magic incantation.
I’m absolutely fascinated by how people are reacting to this new wave of AI tools. Now, I can’t really speak for people in the creative industries - I’m very much a tech person and wouldn’t know good creative content if it slapped me in the face (see my YouTube channel for evidence of this!).
But from technical folk, I see a complete range of reactions.
- The cynical - “meh, it’s not that great - you’ll never replace humans with AI”
- The scared - “it’s the end of days” people who see it as an existential threat to their livelihoods and work
- The enthusiastic - “this is amazing, I can’t wait to see what happens next”
- The luddites - “it’s impressive technology, but I’m not going to use it or allow it in my company”
I can really relate to the “meh” people. It’s a perfectly reasonable reaction. We are constantly bombarded with things billed as the best thing since sliced bread, and most of the time it’s just marketing nonsense and hype (anyone remember Duke Nukem Forever, the Year of the Linux Desktop, flying cars, etc.?). Things like the dot-com boom and bust would have been less painful if people had just been a little more skeptical.
We’ve also had several AI winters - it’s often the case that the hype is far ahead of the reality.
But I think the “meh” people are missing the point. This isn’t about replacing humans with AI, it is about augmenting humans with AI. We’re a long way from AGI (Artificial General Intelligence).
It’s about making us better at what we do, making us more productive, enabling us to be more creative, more efficient, more effective.
Placing the bar as high as replacing humans is a mistake. It makes it far too easy to dismiss the progress that’s being made. You can easily find examples where the AI makes stupid mistakes or produces nonsense output - and then use that as a reason to dismiss it.
“Arrrgh, the end is nigh”
I can also understand the “end of days” people. I’ve sat there watching Copilot happily fill out boilerplate code and tests for me and thought, “hang on a minute - all I’ve done for the past five minutes is hit the tab key a few times… how long will they keep paying me for this…?”
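To make that concrete, here’s a hypothetical sketch of the sort of thing I mean - `slugify` and its tests are invented for illustration, not from any real project, but this is exactly the kind of repetitive test boilerplate that Copilot will happily tab-complete once it has seen the function:

```python
def slugify(title: str) -> str:
    """Convert a title into a URL-friendly slug."""
    return "-".join(title.lower().split())

# Boilerplate tests - the kind of code that mostly writes itself,
# one tab-press at a time.
def test_slugify_basic():
    assert slugify("Hello World") == "hello-world"

def test_slugify_single_word():
    assert slugify("Arduino") == "arduino"

def test_slugify_extra_spaces():
    assert slugify("  ChatGPT  and  Copilot ") == "chatgpt-and-copilot"

if __name__ == "__main__":
    test_slugify_basic()
    test_slugify_single_word()
    test_slugify_extra_spaces()
    print("all tests pass")
```

Nothing here is hard - which is precisely why watching a tool write it for you is both delightful and slightly unnerving.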
But really, the end-of-days people should borrow a healthy dose of cynicism from the “meh” people. There has been some amazing progress in just the past couple of years, and new models are coming out all the time (GPT4 should be out early this year). But the knowledge these models are built on has all come from humans, and much of it is deeply flawed and biased. The output must be verified and checked before it can really be used.
For senior engineers and architects, you suddenly have an amazing resource at your fingertips that can help you. For junior engineers and people just getting started, you now have on tap a senior engineer who can help and guide you through solving problems.
Having said that, if you’ve not been paying attention, these newer models almost seem to have come out of nowhere. We are facing disruption and change - that is frightening, and it may indeed cause considerable upheaval in our industry.
“Sign me up!”
As you can probably tell, I’m firmly in the “this is amazing” camp. I want AI to pick up some of the hard work and drudgery so that I can actually be creative and do fantastic things. I want a pair programmer looking over my shoulder, making suggestions, adding to my knowledge, pointing out my bugs, and helping me when I hit a wall. Now, I could be signing my own P45 (for American readers: pink slip), but I do wonder - if a computer can do my job better than I can, maybe I should be doing something else.
I’m also a big fan of the “AI is a tool” way of thinking. It’s an augmentation, not a replacement.
“I’m not going to use it”
The luddites are the people I struggle to understand. Their reluctance is often couched in potential legal issues - “what about copyright?”, “what about liability?” To me these are all valid concerns, but they are often just used as excuses not to even look at and evaluate the technology.
My fear for these people is that they are going to be left behind. Or worse, they are going to suddenly discover that despite their best efforts, the business and people they work with will move on without them.
Talking with other people in the industry, I’m already hearing anecdotal evidence that developers are using these tools anyway - they’re just not telling their managers, or they don’t see any issue with it because all their friends are doing it too.
What should we do?
I think the best way to deal with this is to embrace it: learn about it, understand it, and use it. Everyone else is jumping on board - don’t be left behind!
A quick note on the ethics of it all
I’m not going to go into this in any detail - it’s way above my pay grade, and there are other people who are much better equipped to discuss it. But it is an area that will need to be addressed, and I think sooner rather than later.