Welcome! Let’s get started.

Please name your project.

Click a white circle to add an element.
Add at least one element to each layer to complete your stack.

Device layer

Add anything that describes the product itself. Is it software, hardware, or middleware? What kind of product is it (wearable? smart home? industrial IoT?), and what is inside the device (sensors, microprocessors, connection types)?

Dignity

Non-discrimination

Autonomy

Responsibility

Accountability

Sustainability

Safety & Security

Openness

Wellbeing

Transparency

Participation

Inclusion & Equality

You're done. If any of your answers raise challenging ethical or social impacts, you will find them below, along with some support for reconsidering why each answer could create important ethical challenges. "Reconsider" is neither a grade, nor approval, nor disapproval. We bring to your attention the areas you can rethink and redesign to achieve better alignment with your values.
Choose one issue to focus on. Use the accompanying tools to work through the issue and come up with solutions.

Dignity

Does the device need to be continuously attached to or embedded into the user's body? You answered yes.

That could be an ethical challenge because implants have notable security and privacy vulnerabilities. Implants can also require a lot of user feedback or input in order for the overall service to continue to function. Is this feasible in all physical and mental health conditions? Can the user regulate the implant with any control? Will you present alternatives to implants, or is this the only way users can benefit from your service? Do they have any option to choose how frequently the implant collects their internal and external data?

Does the IoT device contain any sensors that send signals to a user's body, including but not limited to vibrations, buzzing effects, sounds? You answered Yes.

If the device sends signals, is the user given immediate and specific explanations about why they are receiving them? Imagine that the user is constantly receiving these signals. How much stress, anxiety and distraction will that produce?

Is there a possibility that the device will interfere with a user's everyday life? You answered Yes.

Is the user able to turn off the functionality of the device, if it interferes with their life, without posing a health risk to themselves or to those around them? Imagine that the user can no longer move as they need to in a moment of emergency. Could the interference create other health issues related to body movement?

Do you give the user sufficient explanation about the known benefits and risks of the device? You answered No.

Is the user informed about their rights if the device malfunctions or they are adversely affected by it? Consider whether all users, irrespective of race, class, gender, sexuality, or disability, can simply "occupy the role of the wellbeing expert" by possessing more data.

Will users be monitored in private areas such as bathrooms? You answered Yes.

Your answer may go against the basic definition of dignity: "The feeling of control over one’s own destiny that entails relationships of respect. Having a say in tracking, surveillance and control through IoT products. No individual or group should be adversely affected or dehumanised, as a result of using or not using a product. Reflecting on the implications of connectivity in spaces and contexts users might consider as private."

Will there be any spaces free of monitoring? You answered No.

Will you give users the ability to stop being tracked in the areas of their choosing? Will it be possible for the user to pause or stop being tracked, without any health risks or without risking their quality of life? If not, then your answer may go against the basic definition of dignity: "The feeling of control over one’s own destiny that entails relationships of respect. Having a say in tracking, surveillance and control through IoT products. No individual or group should be adversely affected or dehumanised, as a result of using or not using a product. Reflecting on the implications of connectivity in spaces and contexts users might consider as private."

Non-discrimination

Will the system take into account any particular characteristics of the users when making any determination, such as age, gender or disability? You answered Yes.

Is it possible to use your technology without entering any personal characteristics? If not, ask yourself: how could this go wrong? If the system takes these particular characteristics into account, are there users who will never be able to access the benefits of the system because they have been overly categorised? What about people who do not fit the categories: what will they have to do to get the system to work for them? What if your system comes to be used everywhere: what happens to people who do not fit your categories? What becomes the "wrong" thing to be?

Does your technology use any predictive algorithms or machine learning algorithms to predict future behaviour? You answered Yes.

Ask yourself: how could this go wrong? Could there be cases in which the prediction fails? How would that impact someone's life? Use the paper tool 'What If Everyone In The World' to identify risks with respect to the profiles you have created for your users. Consider redesigning with disclosure of the original data source that you used to build or train your technology. Do you use any of the personal data you collect for profiling purposes? If so, consider the following example, where China is using A.I. to profile a minority: in a major ethical leap for the tech world, Chinese start-ups have built algorithms that the government uses to track members of a largely Muslim minority group. Similarly, Amazon's Rekognition software has been used by police but has returned inaccurate or biased results. This is made worse by the fact that Amazon specifies an accuracy threshold that police should apply when using the system, yet the default setting is much lower and police departments don't know that they should reset it.

Do you have measures in place to avoid discrimination? You answered No.

If the way you collect, extract or use personal data could be interpreted as discriminatory in any way, consider the implications of your data management and data analytics for the full variety of your users. Do you have any measures in place to overcome, mitigate and control for risks of discrimination? Consider setting up a group of experts who understand why discrimination can occur through technology systems and how to mitigate it.

Autonomy

Will the device reduce individuals' ability to make their own decisions? You answered Yes.

Are you respecting the individual's wisdom about their own life and considering the long-term impact of your device's decisions? Is there a way for individuals to understand how decisions are made when those decisions are made for them?

Will others (including your company) be able to access the data or the device remotely? You answered Yes.

If remote access is possible, will users be prompted when others are remotely connected to their devices, even for research or testing purposes? Can users prevent or stop others from accessing their devices? Consider: who controls the remote control? What are the impacts of that control? Can it harm the user or those around the user? If control is in the hands of a loved one, is it better to track your loved ones even though it may mean they are a) targeted, b) exposed in terms of their security, or c) limited in their own definition of a good life? What about employees, dependents, and people in unequal power relations?

Will the users have sufficient capability to object to using the product, if they so desire (e.g. workplace setting, where an employer hands out your devices)? You answered No.

Have you considered any risks that might arise from using your product due to inherent power asymmetries? Have you considered any risks of control, abuse and domination that might be inflicted on people through your product? Imagine that the powerful person in the relationship makes a choice that the less powerful user finds extremely problematic. How can they move out of the space that the device has now created? Are there ways to identify and challenge problematic situations and decisions through or with the device?

Responsibility

Will there be a way to challenge any decisions made by the system? You answered No.

Imagine, for example, that the user has a health issue and is no longer able to meet whatever thresholds the system expects. Who has the power to set thresholds, and how can people contest or adjust them?

Will there be clear lines of responsibility for any outcomes, particularly between the developers of the tools and the operators to ensure that any issues are always dealt with? You answered No.

How much of the system do you, the creators, take responsibility for? Has this been discussed and set out in legal terms? How can you be held to account by those who are harmed by your system?

Will there be a way to challenge any decisions on productivity, resource allocation or treatment made by the system? You answered No.

Is it possible that some users - depending on their unique characteristics as humans - have different ways of being "productive" than the ones your device recognises? Who sets the premises on which your system bases allocation decisions? Who is affected, and can they assess the allocation principles and contest them?

Accountability

Will you be sharing personal data with third parties? You answered Yes.

Imagine that the personal data comes from a fitness-monitoring device that grants health insurance benefits. Some users would then be required to wear the device constantly, regardless of disability or unwillingness to do so. Who is left out or made to feel inadequate? What types of behaviour does your device make visible or invisible?

Have you set clear limits on what third parties or partners can do with that information? You answered No.

The cases of misuse of data by third parties are endless. In order to be accountable to your users, you must understand, and be as specific as possible about, how their data is allowed to be used. You may also want to be clear about who is responsible should misuse occur, how to assess whether misuse is occurring, and how to contest it.

Will the device receive advertising messages from third parties? You answered Yes.

What are the impacts on the user of seeing or hearing these advertising messages over time? Will it be clear to the user that what they are seeing is targeted advertising?

Will it be possible for the user to enable and disable the different types of sensors (such as a microphone) in the device via a physical or digital switch? You answered No.

"There have been many cases of microphone sensors malfunctioning in the software and hardware. In that case, the microphone could be ""always on"" without the user knowing, which is a clear invasion of privacy. For example: Google Home Mini was randomly and near-constantly recording sounds in his home and transmitting them to Google. The company acknowledged the problem and is issuing a software update to resolve the issue, which appears to boil down to a failure of the touch sensor on the top. That seems to be the rub (pardon the pun) with the Mini: it thought that somebody was holding its finger down on the top and so was randomly activating and recording. The good news is that the lights turned on to indicate it was listening, but the bad news is that it didn’t make an audible tone, so it took Artem Russakovski a trip through the Home’s search history to discover the error. How can you make potentially privacy invasive actions of your device be as obvious as possible?"

Sustainability

Are the devices reusable? You answered No.

If sustainability is an important value, consider the afterlife of the product once the user has chosen to leave it. How will the devices be disposed of? What happens if a device breaks? Can it be repaired and refurbished, or must it be consigned to landfill? Is it clear what can be reused and what cannot?

Will the servers providing remote functionalities keep functioning for the lifetime of the product? You answered No.

If the product depends on the servers, will the product become useless without the servers? How sustainable is the expectation of constant connectivity? Is there any functionality that does not depend on remote servers?

Safety & Security

Will the device receive software updates for the lifetime of the product? You answered No.

In order to maintain the security of the device and make sure it is not vulnerable to attack, you will need to update the device - at the very least through the paired app. This can be challenging, as users may not take the time or effort to update the app in the app store. It is the responsibility of the maker to at least provide updates with security patches.
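Whatever the delivery channel, an update should also be verified before it is installed. Here is a minimal sketch of such a check, assuming the expected hash is published through an authenticated channel such as an HTTPS manifest; full code signing with asymmetric signatures is the stronger option.

```python
import hashlib

def update_is_authentic(image: bytes, expected_sha256: str) -> bool:
    """Check a downloaded update image against a hash obtained over an
    authenticated channel before installing it. Even a pinned hash
    prevents installing a tampered or corrupted download."""
    return hashlib.sha256(image).hexdigest() == expected_sha256
```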

Is there a unique user name and password for each user/device? You answered No.

This is a key security practice: shared or default credentials are among the most commonly exploited weaknesses in IoT devices. The Mirai botnet, for example, compromised hundreds of thousands of devices simply by trying factory-default usernames and passwords.
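A minimal sketch of per-device provisioning, assuming credentials are generated at the factory or on first boot (the field names here are illustrative, not a prescribed schema):

```python
import secrets

def provision_device(device_id: str) -> dict:
    """Generate unique, random credentials for each device instead of
    shipping a shared factory default."""
    return {
        "device_id": device_id,
        "username": f"dev-{device_id}",
        # Store only a salted hash of this server-side; show the
        # plaintext to the user once, or print it on the device label.
        "password": secrets.token_urlsafe(16),
    }
```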

Is there a point of contact to report security vulnerabilities? You answered No.

This is a key security practice: researchers who discover vulnerabilities need a clear, publicised channel for responsible disclosure. Otherwise, flaws may be published openly or sold rather than reported to you.
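One widely used convention is publishing a security.txt file (RFC 9116) at /.well-known/security.txt on your website. The addresses below are placeholders:

```
Contact: mailto:security@example.com
Expires: 2026-12-31T23:59:59Z
Preferred-Languages: en
Policy: https://example.com/security-policy
```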

Will you ensure the security of the data transmission? You answered No.

If that data were carelessly stored, then stolen in a data breach by a malicious third party and sold to unscrupulous organisations that want to use it to assess health risks, your users could one day face steep increases in health insurance premiums, or even a policy cancellation. The risk is real enough that some companies buy data breach insurance to protect themselves in case consumer information gets into the wrong hands.
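At a minimum, data in transit should be encrypted and the server's identity verified. A minimal sketch in Python, assuming a hypothetical HTTPS ingestion endpoint (api.example.com and the payload fields are placeholders):

```python
import json
import ssl
import urllib.request

# Hypothetical ingestion endpoint; replace with your own API.
ENDPOINT = "https://api.example.com/v1/readings"

def send_reading(payload: dict) -> None:
    """Send a sensor reading over TLS with certificate verification."""
    # create_default_context() enables certificate and hostname checks
    # against the system trust store; never disable verification.
    context = ssl.create_default_context()
    request = urllib.request.Request(
        ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request, context=context) as response:
        response.read()

# Example: send_reading({"device_id": "dev-001", "heart_rate": 72})
```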

Can you ensure that vulnerable users are not reached by strangers using the device? You answered No.

Poorly secured pairing and open connections have allowed strangers to contact vulnerable users directly. The Bluetooth-connected doll My Friend Cayla, for example, accepted connections from any nearby device, letting strangers listen and speak to children; German regulators eventually banned the toy. Consider authentication, pairing and proximity safeguards that prevent unknown parties from reaching your users.

Openness

Will the device allow for third party add-ons or user re-programming? You answered Yes.

How could this go wrong? What if third party add-ons are not in accordance with other values that you uphold, such as inclusion & equality? There can be cases where extremist viewpoints are added to "open" devices or platforms. How will you deal with that?

Will the software in the device be open source? You answered Yes.

How could this go wrong? Open source brings transparency and community scrutiny, but it also means others can modify or fork the software for purposes not in accordance with the values you uphold, such as inclusion & equality. There can be cases where extremist viewpoints are added to "open" devices or platforms. How will you deal with that?

Wellbeing

Do you allow for comparisons among users? You answered Yes.

What if the comparisons drive competitive behaviour to the extreme? Does that add to a sense of wellbeing? Check in with your definition of what wellbeing truly means in your company. Do you 'pay attention to the physical and mental welfare of the users and developers, designers and testers of the product'?

Transparency

When the device is recording video and/or audio, do you prompt the user that this is the case? You answered No.

"Is it transparent not to tell the user about when they are being surveilled by the device? What if other non-primary users are present and did not consent to being surveilled? Check in with the definition of transparency: ""Striving towards achieving clarity throughout the technology development process about the source of materials, hardware and data that goes into the product. It also entails clear communication of the source of funding for the product."" For example: Google Home Mini was randomly and near-constantly recording sounds in his home and transmitting them to Google. The company acknowledged the problem and is issuing a software update to resolve the issue, which appears to boil down to a failure of the touch sensor on the top. That seems to be the rub (pardon the pun) with Russakovskii’s Mini: it thought that somebody was holding its finger down on the top and so was randomly activating and recording. The good news is that the lights turned on to indicate it was listening, but the bad news is that it didn’t make an audible tone, so it took a trip through the Home’s search history to discover the error."

Do you have a clear communication strategy for letting users know about how you collect, store and analyse their data? You answered No.

"How can they understand the decisions made based on data processing if they do not know about the logic of the data processing? Check in with the definition of transparency: ""Striving towards achieving clarity throughout the technology development process about the source of materials, hardware and data that goes into the product. It also entails clear communication of the source of funding for the product."""

Do you have a clear communication plan for disclosing the details of your project to relevant stakeholders, including how you source materials for production, organise your team and the sources of your funding? You answered No.

Check in with the definition of transparency: "Striving towards achieving clarity throughout the technology development process about the source of materials, hardware and data that goes into the product. It also entails clear communication of the source of funding for the product."

Is there a way for the public to get information about the product or service? You answered No.

You should have clear plans for including the public and users in your product development process, and inform users about how they can reach you to discuss the product, their data, or any other concerns or questions they might have. Check in with the definition of transparency: "Striving towards achieving clarity throughout the technology development process about the source of materials, hardware and data that goes into the product. It also entails clear communication of the source of funding for the product."

Participation

Do you have a plan in place for including stakeholders in your product development, design and launch? You answered No.

Participation means effectively engaging data subjects in the design of data processing and promoting debate and dialogue. Consider holding stakeholder workshops where you open up about the project's development and the key areas to which stakeholders can contribute.

Inclusion & Equality

Will the device be used to potentially restrict services to users or groups that are deemed uneconomical? You answered Yes.

Limiting the device to only those who will bring profit calls into question the principles of inclusion and equality.

Will data collected be used to influence socio-economic policies that may be detrimental to certain people, even if others benefit? You answered Yes.

"Can your product have any adverse affects on any vulnerable groups? Imagine if the policy were based on falsified data - either because some point in your system broke or because users found loopholes in the device and used those to their own benefit. For example: China is using A.I. to profile a minority: in a major ethical leap for the tech world, Chinese start-ups have built algorithms that the government uses to track members of a largely Muslim minority group. Amazon’s Rekognition software has been used by the police but returns inaccurate or biased results."

What If Everyone In the World?

Work out the implications of your ethical challenges, assessing the roots of each challenge and ideating options to address it.

Use when trying to understand ethical challenges and difficult decisions.
Takes 45 minutes to an hour.

Mapping options

Facilitate a discussion and evaluation of different ideas you came up with when working through the ethical challenges and/or WIEITW.

Use when assessing possible options.
Takes around 30 minutes.

Reflect on your values

Checking in with your "North Stars" - the values that you had identified at the beginning of your ethical stack - and being realistic about how well you can align with them.

Use as a periodic check-in during the process of product development.
Takes around 20 minutes.

State

Gather your team to articulate your ethical values as individuals and as a group in relation to your product development.

Use at the beginning of product development.
Takes around 30 minutes.

Keep your work!

We don't save your data.
Print this page to keep a record of the state of your stack and continue to solve these ethical challenges.

What else?

Check out the tools for handling your challenges.

Get some value clashes - even more ethical dilemmas for you to consider.

Learn about the project.

Beta Feedback

This is a prototype - a preliminary version of an experience from which other forms are developed.

We know a lot doesn't work - hence the beta prototype. If you enjoyed the experience or learned something from it, let us know.