
The Ethics of Gene Editing

With the ability to genetically modify and edit DNA, scientists have not only unlocked the possibility of curing diseases before they are even diagnosed but have also opened a gateway to the unknown that offers little in the way of protection, security, or peace of mind. As Andrew Joseph puts it, “People have access to more information about their own genes — or, in this case, about the genes of their potential offspring — than ever before. But having that information doesn’t necessarily mean it can be used to inform real-life decisions.” Gene therapy is still in its early stages, and people’s reliance on it to guide decisions about their own well-being and the well-being of their offspring should be taken with a grain of salt. These genetic tests and treatments are not always completely accurate, and for that reason they should not be the deciding factor when a mother and father weigh the ethical implications of what to do with a two-week-old fetus that tests positive for a gene that could harm it in the future.

Bioethicist Alta Charo, on the other hand, does not believe gene therapy will lead parents to choose “designer babies,” or justify governments’ fears that it will pit a new population of people against one another. Her comment that “genetics doesn’t tell us everything we need to know…they have tremendous influence … but we don’t have to assume that by having genetic information we will abuse the choices it facilitates” implies that the worst-case scenarios of gene therapy may never come to pass. China’s leading CRISPR researcher, Dr. Lai Liangxue, sees gene therapy in a similar light as the atomic bomb, noting, “I say that depends on who use it, right? Like, like, atomic bomb. That’s kind of — if you use it to make electricity, it’s good. If you use it in a bomb, it’s bad.” On this view, the fear that an entirely new race of humans could be created within a few decades can be dismissed as unfounded. But scientists whose research is the embodiment of their entire careers are too invested to see the negative implications their work can bring, and instead dismiss these fears as irrational and unnecessary. These scientists express their confidence in the human race by noting that most people are both rational and good. The underlying problem, however, as Dr. Liangxue’s own atomic bomb example suggests, is that not everyone is born innately good, and if gene therapy can pave the way for dangerous experiments, someone can and will use it to their advantage.

Given the comments made by Dr. Liangxue and Dr. Charo, the question becomes: are these fears actually irrational and unnecessary? If they are, why do the Department of Defense and the Pentagon want influence over genetic treatments? Not for nothing, when a nation’s military and security sector wants to track and monitor genetic treatments, the “unfounded” fears that scientists argue against are, to say the least, significant enough to be relevant. When the Pentagon sees genetic treatments as a way for “agents of war” to injure thousands, the healthcare community needs a wake-up call. The FDA alone is no longer enough to keep genetic therapy out of the hands of those who would do evil, and even with its regulation, there is no telling how much control it will have over medical and genetic treatments. It can be argued, though, that in the coming age of CRISPR and other gene therapies, giving everyone the same equal access to information will be essential to preventing the ethical and moral issues that arise when it is withheld. As journalist and historian David Perry puts it, “the pro-information approach demands that everyone involved in genetic counseling have access to the best data and presents it in a value-neutral way.”

The gene therapy community, invested in its own research and the development of its own treatments, has little motivation to speak about the negative effects of gene therapy. The issues that arise from that silence will amount to more than just sidestepping the problem until more government agencies get involved.

Ethical Implications of Autonomous Vehicles

Autonomous vehicles will one day become the normal way to travel. However, this new technology still holds plenty of ethical issues and dangers. It is easy to forget the impact technology has on society when everything is innovative and exciting. Nevertheless, the ethical implications must be discussed in order to avoid potential accidents and tragedies. The possible dangers include hacking and a vehicle failing to make a decision fast enough. The ethical questions include what kinds of decisions should be made in certain situations, and whether programmers should be transparent with consumers so they know exactly what an autonomous vehicle entails.

These cars rely on machine learning to evaluate situations and make decisions. The computer is not simply given a set of rules to follow. Instead, it is fed images of objects (for example, a pedestrian, a ball, or another vehicle) and tries to guess what each object is. In the beginning, it will guess wrong. As time goes on and it is given more information, the program adjusts itself and, with the help of the car’s sensors, learns what is what. If there is an unidentified object in the road, the car should reduce speed or stop altogether. Another way for the vehicle to learn is for the programmer to feed it specific traffic situations along with the right way out of them; the algorithm learns from those, along with other aspects of a situation, and determines the correct response. One of the main issues is how fast these computers will be able to make decisions. While humans drive, it takes a split second to make a mistake that could lead to terrible things, and just as little time to avoid it. Most people have good reflexes and are able to avoid a tragedy; autonomous vehicles must have that same ability. Machine learning and powerful onboard computing must work together to allow these cars to make effective, quick decisions. The programmer must teach the computer the basic rules of the road and allow the machine to learn impactful, successful ways to avoid accidents.
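The classify-then-decide loop described above can be sketched in a few lines. This is purely illustrative: the object labels, confidence threshold, and speed rules below are hypothetical stand-ins, not any manufacturer's actual software.

```python
# Toy sketch of the classify-then-decide behavior described above.
# All labels, thresholds, and policies here are hypothetical.

KNOWN_OBJECTS = {"pedestrian", "ball", "vehicle"}

def classify(sensor_label: str, confidence: float, threshold: float = 0.8):
    """Return the label if the model is confident, else None (unidentified)."""
    if sensor_label in KNOWN_OBJECTS and confidence >= threshold:
        return sensor_label
    return None

def decide(label, current_speed: float) -> float:
    """Slow down for unidentified objects; stop for a pedestrian."""
    if label is None:
        return current_speed * 0.5   # unidentified object: reduce speed
    if label == "pedestrian":
        return 0.0                   # pedestrian ahead: stop altogether
    return current_speed             # recognized, non-threatening: carry on

# An object the model has never learned triggers a speed reduction,
# exactly the fallback behavior the paragraph above calls for.
print(decide(classify("shopping_cart", 0.9), 60.0))   # 30.0
print(decide(classify("pedestrian", 0.95), 60.0))     # 0.0
```

The point of the sketch is the structure, not the numbers: perception produces an uncertain label, and the decision layer must have a safe default when that label is missing.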

Another issue that stems from autonomous vehicles is the possibility of someone hacking one. Skilled hackers are able to break into almost anything, so how is a vehicle any different? Software and algorithms will power these cars, and both can be exploited by malicious people who intend to harm others. It might not be a trend now, but as this technology becomes more normalized, anything is possible. Car and technology companies claim that consumers will be safe from hacking; it would be in consumers’ best interest not to take their word for it. Some in the industry have waited to announce a recall until doing so became cheaper than paying the wrongful-death lawsuits, so it would take real steps in the right direction for these companies to earn trust on ethical grounds. As the conversation about hacking grows, there should be laws requiring cars to have certain types of encryption and cybersecurity in order to protect the passengers of autonomous vehicles. These companies should also consider greater transparency with their consumers; if they are ethical and genuine, consumers could then trust their products, in this case an autonomous car.
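One concrete form the "encryption and cybersecurity" requirement above could take is authenticating every command sent to the vehicle, so that a spoofed message from an attacker is rejected. A minimal sketch using an HMAC, with key management deliberately omitted (the key, command format, and function names are all hypothetical):

```python
import hmac
import hashlib

# In a real vehicle the key would live in secure hardware, never in source.
SECRET_KEY = b"demo-key-not-for-production"

def sign_command(command: bytes) -> bytes:
    """Attach an HMAC-SHA256 tag proving the command came from the key holder."""
    return hmac.new(SECRET_KEY, command, hashlib.sha256).digest()

def verify_command(command: bytes, tag: bytes) -> bool:
    """Reject any command whose tag does not match (e.g. forged or tampered)."""
    expected = hmac.new(SECRET_KEY, command, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)   # constant-time comparison

cmd = b"set_speed:30"
tag = sign_command(cmd)
print(verify_command(cmd, tag))                # True: authentic command
print(verify_command(b"set_speed:120", tag))   # False: tampered command
```

Message authentication alone does not make a car unhackable, but it illustrates the kind of baseline protection a law could plausibly mandate.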

Though autonomous vehicles will be able to solve many problems and remove human error, it is important that they do not replace it with programming errors. It is also essential that the algorithm learns in an ethical and correct way. It may take time to determine what that means, but the goal is the minimum number of accidents and casualties. Car and technology companies must implement sophisticated cybersecurity to protect against any hacking that might take place, for any reason. There is a long road ahead in building this software the right way, so that when it learns, it learns in a correct and moral manner. Machine learning plays a big role in the success of autonomous vehicles, and though there will be problems, it would be beneficial to minimize them as much as possible.

Automated Vehicles: Questionable Ethics

Ethics is hardly, if ever, the starting point of a conversation about automated vehicles. The truth is that in the world of automated vehicles, ethics is often overlooked or forgotten entirely, seemingly taking a backseat in the discussion. In this blog post, however, I hope to rectify that, because after reading an article last week, ethics in autonomous vehicles is the only topic I have been researching. Pulling no punches, the article in question looks at Mercedes-Benz as the first car manufacturer to release its software information, and it raises a rather blunt initial question: in the event of an unavoidable crash, would you want your new self-driving vehicle to prioritize your own life as the owner of the vehicle, or the lives of several innocent children? In so many words, the scenario goes like this: suppose you are in an autonomous vehicle and a car is on the wrong side of the road. The software driving your car now has a decision to make: it can swerve left into oncoming traffic, putting you in immediate danger, or swerve right onto the sidewalk and potentially harm a group of children walking home from school. The decision is not an easy one, nor is it one to be made lightly.

Regardless, per the article, Mercedes has now given its answer to this question: the car will swerve to the right and run over the group of children on their way home from school. To some people this may be the clear decision to make, but others may still wonder exactly why Mercedes has gone in this direction when programming the software for its autonomous vehicles. To help understand, we will look at a moral issue very closely associated with this dilemma: the “Trolley Problem.” The Trolley Problem is a thought experiment developed by Philippa Foot in 1967. A runaway trolley is heading down a track where people are working. If the trolley stays on its current path, it will kill five people on the tracks; if you pull a lever, the trolley will instead go down a different track, killing only one person. The question raised is obvious: what is the right decision to make in this situation? This has been the moral dilemma as we’ve understood it for decades, but now car manufacturers must address the same issue with a whole new layer of complexity added to the equation.
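The standard utilitarian reading of the Trolley Problem, minimize total casualties, can be written as a tiny decision rule. This is purely illustrative and encodes only that one reading; the function name and inputs are my own:

```python
def utilitarian_choice(casualties_if_stay: int, casualties_if_switch: int) -> str:
    """Pick whichever action minimizes total casualties.

    Note what this rule deliberately ignores: WHO the casualties are.
    That is exactly the dimension on which an occupant-first policy,
    like the one attributed to Mercedes above, diverges from it.
    """
    if casualties_if_switch < casualties_if_stay:
        return "switch"
    return "stay"

# Foot's original numbers: five on the current track, one on the other.
print(utilitarian_choice(5, 1))   # "switch": one death instead of five
```

A car manufacturer's dilemma is that it cannot leave this choice implicit; whatever policy ships in the software is, in effect, an answer written down in advance.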

As I’m sure most of you have concluded by now, in the real world of self-driving cars this problem is more than just an ethical dilemma; it’s a PR bomb waiting to go off. Think about the car companies that will soon be designing and programming the autonomous vehicles of the future, and ask which car you would rather drive: the one that prioritizes your own personal safety at all times, or the one that prioritizes others before you. I know which I’d rather drive. The fact of the matter is that if Mercedes-Benz (or any other automotive company, for that matter) prides itself on customer satisfaction, it would make absolutely no sense to program cars not to prioritize customer safety. Essentially, assuming there will still be automotive accidents, Mercedes-Benz would effectively be designing “death cars” in the eyes of its customers, which is not a very good business strategy at all. Another article raises similar questions about the difference between a car and a motorcycle: in such a situation, is it better to hit the car or the motorcycle? The unfortunate fact is that all of this is a catch-22, not helped by the fact that right now there is no law in place to point car manufacturers and developers in the legally “right” direction. It therefore makes the most sense that manufacturers are taking the Mercedes-Benz approach and developing with the customer in mind. What are your thoughts on this situation? Should automotive companies take this kind of approach in the future, or is there perhaps a better solution?