
When Software Glitches Kill

If you haven’t been paying attention for the last 20 years, software is everywhere. Nearly everything now has some kind of software working behind it: kids’ toys, cars, air conditioners, phones, speakers, and even mirrors are becoming software- and voice-enabled. Software will continue to invade every aspect of life it possibly can, and that is both a good and a bad thing. Software in and of itself is innocuous; it is how it is developed, tested, and used that makes the difference. It is in that difference where ethics, accountability, and the speed of change collide.

The speed of change affects more than technology outpacing the laws meant to keep it in check. It also forces companies to compete and adapt at speeds never before seen in history. Keeping up is at the forefront of every company’s mind, because those that don’t get passed by an old competitor or overtaken by a new one. Ask Gillette about Dollar Shave Club. This is why so many companies have an innovation lab: they are trying to maintain a dedicated focus on improving their product. And when that doesn’t work, they try to merge with a competitor or purchase the upstart; ask Edgewell (maker of Schick), which agreed to acquire Harry’s Razors, or Unilever, which bought Dollar Shave Club.


While those products are not software driven, the example holds: companies have to compete every day to keep their market share. On the surface that seems fine; it is under the surface where the problems start to rise. When it comes to software, you have to be careful that the desire to win does not overtake proper ethics in how you release your product and the software behind it. Releasing buggy software can kill, and it has killed hundreds of people.

Recently we saw two fatal Boeing crashes involving 737 MAX aircraft, and the investigations showed the flight-control software was partially responsible. In 2018, one of Uber’s self-driving cars killed a pedestrian, in part because of software failures. And in 1991, a software error in a MIM-104 Patriot missile battery caused its system clock to drift by about a third of a second over one hundred hours of operation, resulting in a failure to track and intercept an incoming Scud missile that killed 28 American soldiers. Unfortunately, the examples go on and on, and this is where ethics and accountability come in.
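The Patriot failure is worth dwelling on, because it shows how a vanishingly small numerical error compounds into a lethal one. The system counted time in tenths of a second, but 0.1 has no exact binary representation, so each tick silently lost a sliver of time. A minimal Python sketch (assuming a 23-bit fractional truncation, consistent with published accounts of the bug) shows the scale of the drift:

```python
# Sketch of the Patriot-style clock drift: 0.1 second has no exact
# binary representation, so truncating it to a fixed number of
# fractional bits loses a tiny amount of time on every tick.

FRACTION_BITS = 23  # assumed register precision, per published accounts
truncated_tenth = int(0.1 * 2**FRACTION_BITS) / 2**FRACTION_BITS

error_per_tick = 0.1 - truncated_tenth   # a few hundred-millionths of a second
ticks = 100 * 60 * 60 * 10               # ticks in 100 hours (10 per second)
drift = ticks * error_per_tick

print(f"error per tick: {error_per_tick:.9f} s")
print(f"clock drift after 100 hours: {drift:.2f} s")

# A Scud travels roughly 1,676 m/s, so a clock off by a third of a
# second puts the radar's range gate hundreds of meters from the target:
print(f"tracking offset: {drift * 1676:.0f} m")
```

An error of about a ten-millionth of a second per tick sounds harmless, yet after 100 hours of uptime it adds up to roughly a third of a second, which at Scud speeds moves the radar’s search window far enough to miss the missile entirely.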

Let me be clear: the majority of software bugs do not put people in any physical danger, but my point is that it does happen. In the race to be first, companies release software they know is not fit for release; instead of testing it fully, they are more concerned with hitting a deadline. As a population we have become accustomed to bugs and glitches and have learned to patiently wait for the next patch. But is this how it should be? Why not test thoroughly and release only when you know the software actually works? I think our apathy toward buggy software helps drive complacency. I mean, have you ever tried to report a bug to anyone? It is close to mission impossible, so why try?

As for the ethics questions: who is to blame? The company? The software engineers? The release manager? Can anybody be held accountable, especially when people die? At some point there may need to be some type of oversight, and maybe even a hotline where software engineers can anonymously report software that is not ready for release and could have damaging consequences if shipped. There is a lot of code out there holding human lives in the balance, and we need to start thinking more deeply about that in order to find solutions that protect the general public.

Then what about the user? What if they don’t use the software or equipment properly? Should we expect software engineers to anticipate every potential misuse as well? Testing for proper use and function is one thing, but what about testing for improper use and its outcomes? Should developers work more closely with end users to understand all the different ways their product is used? I can go back and forth on this all day, because this is where it starts to get really muddy. These are all tough questions that need answers, and as one CIO told me, “buggy software glitches are not that big of a deal” is not the answer.

Kelsey Meyer

