The dissonance between AI and learning

Yesterday one of my colleagues showcased a very creative idea she helped her young son implement for the school fête. Instead of selling sweets or cakes or lemonade, she helped him set up a song generator: “step up and we’ll generate a song just for you!” The concept was deceptively simple: give the Freshbots AI song lyrics generator a short prompt about the person. It writes lyrics around your chosen subject, which you then copy and paste into Mureka, which sets those lyrics to music in whatever style you choose. I think we all know these tools are out there, and we’re all experimenting with them – but occasionally I still find myself astonished to the point where my worldview feels like it’s tilted on its axis. This was one of those times!

I gave it a go immediately after the meeting. I wrote a short prompt about my dog Otti (who I still love to bits even though he has passed on now ❤️). Lyrics were forthcoming almost immediately. A few minutes after pasting them into Mureka, voilà: I had two versions of the song to compare and see which appealed most. The 🎶 song I liked best 🎶 was ridiculously cute and captured the essence of all the things I loved about that dog. I sent it to my mother and she was actually moved to tears!

But it also set me thinking…

As a child, I felt a strong pull towards music. I wanted to be able to play the piano like Glenn Gould one day! This resulted in my grandmother’s piano being moved into our house and the commencement of piano lessons. Over a period of 10 years I honed my skills with many hours of practice – by my final year of school most days involved at least 2 hours of practice time if not more. I continued this quest at university where I enrolled in a Bachelor of Music. Eventually I moved on from music into IT which has suited me better as a career, but I’ve subsequently learned to play the accordion (which was relatively easy to pick up given my musical background) and playing it gives me great pleasure – even though it’s an activity I reserve for me, myself and I!

Two thoughts spring to mind when contrasting my experience with that of today’s seven-year-old.

On the one hand, if I were drawn to music now, would I spend the next 10 years of my life learning how to play the piano really well? I feel like perhaps not. When you see that you can spend five minutes on a fairly mediocre prompt and create music that is capable of entertaining (or even moving people to tears), what incentive is there to begin the genuinely arduous process of learning how to physically produce music using a real instrument? Even though I ultimately didn’t become a working musician, I am immensely grateful that I spent those years learning how to perform music and do it well. It helped me develop tremendous self-discipline. It honed my cognitive abilities. It helped me develop the skill of self-reflection. I learned resilience in the face of failure and disappointment. The list goes on. By learning anything the hard way, we not only hone the skill we are striving for but develop a whole lot of other skills along the way. I am concerned that, by making things too easy, AI tools will reduce our incentive to pursue knowledge and skill in this way – which may leave us poorer emotionally, intellectually, and even socially.

On the other hand, I do find it astonishing to have access to the tools that enabled me to generate a song about my beloved dog! Although I devoted so much of my time to music, my focus was on performing not on producing. I’m lousy at writing poetry and music; neither am I rich enough to commission a song about my dog from someone who can! In that sense AI enabled me to do something I was never going to be able to learn to do for myself anyway.

As a result I am left feeling somewhat conflicted!

I find this kind of dissonance is present in the workplace too. I came late to coding: I started learning to code in 2017 and it was quite a steep learning curve for me! But again, the struggle produced real understanding of the different ways objectives can be accomplished, and how and why some solutions might work better than others, depending on the situation. At this stage of my career I consider myself to be just a competent coder. However, producing code that can run well in Jupyter notebooks is not the same as producing elegant, efficient, robust code that runs in production. I aspire to become an excellent coder!

With the introduction of AI into our daily routine, however, I find that this is almost becoming a goal I need to pursue in my own time rather than something I can work on in the workplace. Here is why: the use of AI tools like ChatGPT and Copilot is expected to accelerate productivity. And while it is true that we are exhorted to check and understand the outputs before including them in our codebase, this is by no means the same as having produced the code yourself. If I think back to school days, it would be the equivalent of the teacher saying ‘Read Michael’s essay and see if you think it will pass muster as a submission for your own essay – maybe with a few tweaks.’ What would you learn from that? Very little!

I consider myself fortunate to have learned to code in a time when the struggle was still real! There was Google, there was Medium, there was library documentation, and there were platforms like Stack Overflow and Kaggle. You started with a blank slate and had to first break a problem down into its component parts to understand it. Then you had to conceptualize how the problem might be solved – usually exploring several different options. And eventually you had to start to code – only to discover that your code took 30 minutes to run, and you had to figure out why and how to optimize it. As with learning music, you did not just learn how to code: there were a host of associated skills developed along the way, like how to articulate the problem and its parameters clearly, how to research options, how to conduct experiments and evaluate results – not to mention identifying that fine line when it’s time to stop investigating and ask for help.

The other day at a meetup I overheard a conversation that went along the lines of ‘That guy can’t write a line of code unless an AI helps him’. I could only feel sympathy for this guy, whoever he is. Of course he can’t! If you’re near the beginning of your coding journey in the workplace (or, like me, keen to improve your existing skills), you can make one of two choices: code the solution the hard way, taking eight hours to complete the task but learning a great deal; or code the solution the easy way with the aid of AI tools, taking two hours but gaining only a superficial understanding. In a competitive workplace (and all workplaces are, to a degree) it’s difficult to take the eight-hour route because you will look bad next to your peers. But at the same time, no learning is taking place – which will also cause you to look bad next to your peers! It feels like being between the proverbial rock and a hard place.

With prompting AI tools, it is still very much the case that the more clearly you can articulate the problem and the parameters of the desired solution, the better the output will be. However, if learning is stunted too early in the process, those abilities will not develop to the same degree – resulting in sub-optimal outputs. Furthermore, if your basic grasp of best practices is shaky, it may undermine your ability to effectively review AI outputs. For example, I have seen ChatGPT recommend code that processes data in a for loop. If you have not truly grasped how much more performant a list comprehension can be, you might just use that code verbatim – it will work, but to the detriment of the performance of your program.
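To make the for-loop example concrete, here is a minimal, self-contained benchmark (the function names and data sizes are my own illustration, not from any AI output I reviewed). Both functions produce identical results; the comprehension is typically faster because it avoids a method lookup and call to `append` on every iteration – exactly the kind of difference a reviewer needs enough grounding to notice.

```python
import timeit

# Illustrative dataset; exact timings will vary by machine and Python version.
data = list(range(100_000))

def squares_loop(values):
    """Build a list of squares with an explicit for loop,
    appending one element at a time."""
    result = []
    for v in values:
        result.append(v * v)
    return result

def squares_comprehension(values):
    """Build the same list with a list comprehension, which avoids
    the repeated attribute lookup and call on result.append."""
    return [v * v for v in values]

loop_time = timeit.timeit(lambda: squares_loop(data), number=50)
comp_time = timeit.timeit(lambda: squares_comprehension(data), number=50)

print(f"for loop:           {loop_time:.3f}s")
print(f"list comprehension: {comp_time:.3f}s")
```

The point is not that every loop must become a comprehension – readability matters too – but that reviewing AI-generated code well requires already knowing which alternatives exist and when they matter.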

The other thing is, humans are inherently lazy (and I include myself here!). Daniel Kahneman, in his book Thinking, Fast and Slow, outlines how we are inclined to prefer System 1 thinking (“fast, automatic, frequent, emotional, stereotypic, unconscious”) over System 2 thinking (“slow, effortful, infrequent, logical, calculating, conscious”). Constantly reviewing work that is not your own plays into System 1 thinking: it does not encourage creativity or criticality! Only once you start to really grapple with something does System 2 get activated – this is the sweet spot where you start to apply your mind to the best way to solve a problem, and learning and growth take place.

My personal resolution is to purposefully set aside a portion of my time for effortful learning activities: whether this is learning to produce more elegant code, or learning a new piece of music on my accordion. Even writing this article fits into that category! It would have been so easy to take my initial thought – “I’m concerned about the potential conflict between AI tools and learning” – and prompt an AI model to generate some ideas for me on this topic. But I didn’t. I’ve sat with the thoughts for the last while and let them percolate. I’ve tried to think through what the potential pitfalls are, but at the same time also to understand why these tools are so compelling and useful, and I’ve tried to think about what I can do to mitigate their possible negative side-effects. By starting with a blank page and trying to articulate where I’m at, I’ve reached a deeper understanding of myself in relation to changes in my industry than I would have by reaching for a quick answer.

To sum up, I would like to try to remember that effort is its own reward and to make space in my life where I can work on reaping those rewards!