Artificial Intelligence
In our last newsletter, we discussed the rapid advances in a form of Artificial Intelligence (AI) called Generative AI (GenAI). At that time, GenAI had been publicly available and making news for about a year. As 2024 begins, we may be in the early stages of shifting away from discussing hypothetical risks toward more practical and applied conversations about AI.
Reflecting on last year, we saw a tremendous amount of hype around the use of AI. It seemed like every company, from Big Tech to small non-tech businesses, was announcing new products, product enhancements, or other service improvements thanks to the “magic” of AI. It is hard to tell how much of this was real versus marketing. However, I think over time we will see that although there may be many potential applications of AI, only a small portion will have utility in the near term. Why is this? Although GenAI has caught the world's attention and makes AI seem like a brand-new technology, the truth is that it is not. Yes, GenAI has helped advance AI by giving people an easy way to generate content quickly, but AI has existed as a field of study, with practical uses, since the mid-1900s.
In 2023, we learned a lot about how large language models (the technology behind GenAI) work, but we still do not fully understand them. These models can make things up, contain biases, be exploited to expose private information, and infringe on intellectual property. Furthermore, the models can be unpredictable, and they appear to require considerable amounts of electricity to run, which works against efforts to reduce our carbon footprint.
Last year, there was much discussion of AI's potential macro risks. Some of this conversation was based on the idea that one day society will build an AI smarter than humans, which could lead to serious consequences. Cue scenes from 1984’s “The Terminator.” Both sides were heavily debated, and I know no better than you whether this will become reality. What I do know is that these conversations should help shape what society views as appropriate and responsible use of AI. At the same time, if we spend too much time focusing on extreme hypothetical risks, we spend less time discussing the real harms AI may be causing today.
As 2024 begins, that shift toward practical and applied conversations, such as those around copyright and regulation, may already be underway. The question is: will AI regulation happen more quickly than it did for previous advances in technology? Last year, it seemed like every government body, from individual states to Congress, the EU, and the G7 countries, was talking about AI policy and regulation. The White House issued an executive order on the trustworthy development and use of AI. We saw voluntary commitments from leading AI companies to be more transparent about AI standards. I do not recall such a quick response to social media platforms, so perhaps we are making progress.
So, what does this all mean? That is a great question, and if you know the answer, we should talk. All joking aside, no one knows what it all means, and if anyone claims to know, be cautious of their wisdom or words. My recommendation is to do your own research and become informed on the various viewpoints and perspectives; the topic is not going away anytime soon. I have learned over the last year that the extremes of AI, positive or negative, are just that: extremes. Real, productive conversations happen when people with opposing viewpoints come together to explore if and how AI should be used.