In anticipation of the next 12 months of innovation and exploration, what are some of the areas we should pay particular attention to when it comes to artificial intelligence?
It is hard to believe, but we are already almost a full month into the new year. With that in mind, it can be helpful to look ahead at how you want to improve and grow over the course of 2026.
If you are looking to have a happy and productive year at work, especially if you are in the STEM field, it could be useful to turn your attention to a topic that is always in conversation: artificial intelligence (AI), the risks it can pose and how you can mitigate them.
So, let’s jump right in: what are some of the AI-related challenges professionals should keep an eye on in the coming months?
Weak regulation
This is perhaps the most high-profile area where AI is currently falling short. The lack of regulation and policy in certain areas of innovation is a major issue right now, whether you are in the workplace or not.
Elon Musk is in the hot seat over a failure to quickly and effectively crack down on the misuse of his Grok technology, which is in some cases being used to create explicit and illegal material. In response, a number of global regulators have expressed deep concern about where this could lead.
For example, authorities in Malaysia and Indonesia have blocked access to Grok over explicit deepfakes, the European Commission announced it is looking into cases of sexually suggestive imagery, and the Irish media regulator Coimisiún na Meán said it is engaging with the European Commission over Grok and has also engaged with An Garda Síochána on the matter.
And they aren’t alone – Australia, Germany, Italy, France and the UK have all expressed concern about how advanced technologies can affect safety. So for 2026, it is crucial that professionals prioritise ethical, transparent and compliant AI technologies.
No future knowledge
Globally, we are in a position where we can envision a quantum future, even if we aren’t quite there yet. That is to say that human beings – by their nature – are dreamers, constantly imagining all of the possibilities at once and working towards an eventual outcome. When it comes to AI, there is an argument to be made that we have overshot a little; while we have the technology to get it up and running, some experts say AI adoption is greatly outpacing the related security and governance.
This can create a host of new threats. An IBM report, published in the middle of last year, found that organisations are increasingly bypassing security and governance for AI in favour of faster technology adoption. This can expose both the individual and the organisation to much greater risk than a more measured, strategic approach would.
The Allianz Risk Barometer for 2026 found that AI had “climbed to its highest-ever position of number two, up from number 10”, with both cyber and AI now ranked among the top five concerns for companies in almost every industry sector.
It kills motivation
Compared to the real-world dangers of vulnerable security systems and the potential for illegal usage, AI causing a lack of upskilling and motivation in professionals may sound trivial, but it is an element of AI technology that could significantly impact or even derail someone’s career ambitions.
Research suggests that an over-reliance on AI in an educational setting can limit creative and critical thinking, as those trying to learn instead use technology in lieu of their own research. People are at risk of skill decay, which is essentially the atrophying of your own skillset over time as you outsource too much of your work and thinking to AI.
After a while, you may find that you lack motivation for your job, that you are encountering elements of the work you no longer fully understand, and that there are inconsistencies in your results or outputs. As we all know by now, AI cannot be trusted blindly; everything you use it for needs to be reviewed and fact-checked by an actual human being.
Not sustainable
As we hurtle ever closer to 2030 and the commitments we made to ensuring a safe and green planet for all, it is becoming apparent that the commitment made by some to AI innovation could be standing in the way. AI infrastructure, such as data centres, is notorious for the waste it produces, as well as for requiring large quantities of water, critical minerals and rare earth elements. These are often extracted in unethical, unsustainable ways, generating further emissions and contributing to the worsening climate crisis.
There are innovators, however, who are working on alternative materials and processes that require fewer natural resources, thereby reducing the impact on the planet.
If you are a professional who aims to be as green as possible, despite working in a field not always associated with sustainability, then AI could be an area where you raise awareness as you endeavour to find more sustainable ways of working and encourage others to do the same.
