Is data ethics an oxymoron? The big data debate remains a hot one as we continue to define, understand and navigate the role of data ethics in the digital age.

 

Recently, our founder and CEO, Greg Woolf, was invited to speak at the MIT Enterprise Forum “Data Ethics: Exploring Vice and Virtue in Big Data” event along with a knowledgeable, diverse group of panelists. It was a great opportunity to explore ethical questions and discuss a topic that evokes strong opinions and viewpoints. As the scope for digital risk continues to expand, the question of how we build digital trust is critical.

“With great power comes great responsibility.”

– Voltaire

In other words, if you have the ability to do something, you have an obligation to do it for the good of others. With our world now immersed in powerful technology, the result is big data. So is data the new world power? Globally, across all industries, individuals and businesses are embracing the power and competitive advantage that good data can unlock and reveal.

 

Data analytics and insights are influential tools; applied and analyzed correctly, their impact can be revolutionary. Used incorrectly, they can have unethical and harmful effects. These outcomes place great responsibility in our hands: what we do with all of this information, and how we handle it, continues to expose both ethical obligations and transformational changes across the landscape of business practices.

Ethics, beliefs and principles help us determine whether something is useful or creepy. Technology is progressing at an exponential rate, but our ethics, social contracts and laws remain linear.

Data Ethics Defined: Data ethics is a new branch of ethics that studies and evaluates moral problems related to data (including generation, recording, curation, processing, dissemination, sharing and use), algorithms (including artificial intelligence, artificial agents, machine learning and robots) and corresponding practices (including responsible innovation, programming, hacking and professional codes), in order to formulate and support morally good solutions (e.g. right conducts or right values).

Do data collection and data analysis practices require a framework or code of ethics? With major consumer privacy legislation like the EU’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act of 2018 being implemented, we are already seeing how advances in data privacy will be shaped by policy. We need to be having these conversations early and often, especially as consumers become more aware of their digital footprints.

Furthermore, different people have different values, principles, beliefs and convictions, so including a diverse range of perspectives is valuable. This was evident at the MIT Forum discussion. The evening opened with the moderator presenting the audience with various scenarios and asking whether they believed each one was ethical or unethical.

For example:

  • Is it ethical if a medical practitioner collects your data to provide you with better healthcare?
  • Is it ethical if an insurance company collects your data to determine your premiums?
  • Is it ethical if an insurance company collects your data to determine the premiums of your family members?

Some of the issues that surfaced in the discussion included:

  • Transparency and lack of informed consent
  • Companies profiting from personal data
  • How data affects family members
  • Insufficient access to, and control over, personal information
  • Concerns regarding costs vs. benefits

In this digital age, analyzing and utilizing insights from data has introduced new classes of risk. These include unethical, or even illegal, use of insights; amplifying biases that intensify issues of social and economic justice; and using data for purposes that were never consented to. Ultimately, these practices can permanently damage consumer trust.

Artificial Intelligence (AI) and Machine Learning (ML) are becoming a big part of this discussion as they grow increasingly widespread. The goal is to control risk while leaving enough flexibility to take advantage of the potential benefits of these data technologies, now and in the future.

AI researchers pride themselves on the accuracy of their results; however, AI applications and machine learning algorithms are at risk of discriminatory practices and illegal bias. If an algorithm is making inaccurate or unethical decisions, it may mean there wasn’t sufficient data to train the model, or that the learning reinforcement wasn’t appropriate for the desired results.

This issue, called algorithmic bias, has been identified in a variety of contexts, including judicial sentencing, credit scoring, issuance of Medicaid, education curriculum design, loan approval, and hiring decisions. It can originate in the development of an algorithm even when there is no intention of bias or discrimination. Greg identifies this as the “modern risk.” He explains that, “the data is out, but what we do with that data is truly the hot issue and the cost can potentially be very high.”
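
To make the idea of algorithmic bias concrete, here is a minimal, hypothetical sketch in Python of one common check: comparing a model’s positive-outcome rates across a protected group. This is not a method described by the panelists or FiVerity; the loan-approval data is entirely synthetic, and the 0.8 “four-fifths” threshold at the end is a common rule of thumb for flagging disparate impact, not a legal standard.

```python
# Minimal sketch: detect disparate outcomes across a protected group.
# All data is synthetic; names and thresholds are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5_000

# Synthetic "loan approval" data: income (in $k), debt ratio, protected attribute.
group = rng.integers(0, 2, size=n)               # 0/1 protected group label
income = rng.normal(50, 15, size=n) + group * 5  # feature correlated with group
debt_ratio = rng.uniform(0, 1, size=n)

# Historical approvals encode the same correlation, so a model can inherit it
# even though the protected attribute itself is never used as a feature.
score = 0.05 * income - 3 * debt_ratio
approved = (score + rng.normal(0, 0.5, size=n)) > 0.5

X = np.column_stack([income, debt_ratio])        # note: `group` is NOT a feature
model = LogisticRegression(max_iter=1000).fit(X, approved)
pred = model.predict(X)

# Demographic parity: compare predicted approval rates per group.
rate_0 = pred[group == 0].mean()
rate_1 = pred[group == 1].mean()
print(f"approval rate, group 0: {rate_0:.1%}")
print(f"approval rate, group 1: {rate_1:.1%}")
print(f"demographic parity difference: {abs(rate_0 - rate_1):.1%}")

# Disparate impact ratio; values below ~0.8 are often flagged for review.
ratio = min(rate_0, rate_1) / max(rate_0, rate_1)
print(f"disparate impact ratio: {ratio:.2f}")
```

Even in this toy setup, the approval rates differ across groups because the model learns from features correlated with the protected attribute, which is one way bias creeps in even when there is no intention of bias or discrimination.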

So, what can we do? How will standards of accountability, transparency and recourse in data ethics and AI systems evolve? We need to stay educated and engaged. With oversight of algorithmic bias and data ethics, society can reach an appropriate balance between the risks and benefits of data technology.

It’s an exciting time, as new technology is being developed to solve key problems faced by consumers, businesses and the world at large. We’re looking forward to continuing the conversation around data, data ethics and privacy, algorithms and the future of AI. Almost everything that can be digitized and automated…will be.

Resources: IBM  |  Accenture  |  Dataversity  |  Brainy Quote  |  TED
