Do we need to regulate AI now before it becomes a danger to humanity?
Technological advancements over the past three decades have changed the world as we know it at an unprecedented pace. We can now connect globally and instantly, and we are using robotics and artificial intelligence to improve centuries-old processes like Ford's assembly line.
It saddens me to think that a technology that could improve the lives of billions, such as autonomous farming to ensure all of the world's people are sufficiently fed, is instead being warped into creating new-age killing machines.
Experts on the subject, including Elon Musk, Steve Wozniak and Stephen Hawking, have come together to sign a letter backing a ban on autonomous weapons, much like the ban the UN already has on chemical weapons.
With the US, China, Russia, Israel, South Korea and Britain all currently working to build these autonomous weapons, the dangers must be seriously examined. In the worst-case scenario, some imagine hackers taking control of these systems and making the world of The Terminator a reality. In the best-case scenario, humans simply get even better at killing each other.
I stand with the technology industry's leaders and believe these weapons should be highly regulated. The risk-to-reward ratio is simply too high. If these weapons fall into the wrong hands, we will face an enemy built to kill that we literally created ourselves.