Survalyzer's AI services

Survalyzer’s Ethical AI mission

Survalyzer pursues a responsible approach to the development and use of AI systems. This approach follows the Responsible AI (RAI) programme, which rests on six principles: Fairness, Reliability & Safety, Privacy & Security, Inclusion, Transparency and Accountability. These principles are anchored in the Survalyzer Responsible AI Standards, a comprehensive guide with requirements, practices and governance approaches that ensure that AI systems are developed and used in a human-centred way.

Introduction

This article explains important information about the application of the General Data Protection Regulation (GDPR) and the use of Artificial Intelligence services in Survalyzer. When using Survalyzer's Artificial Intelligence services to analyse open-text responses in surveys, compliance guidelines are paramount to ensure the ethical handling and privacy of data. These guidelines involve adhering to data protection laws, such as the GDPR in Europe.

 

Areas of application of AI technology in the Survalyzer survey software

AI technologies are used in the Survalyzer software for the following areas:

  • For sentiment analysis of open-ended responses

  • For the categorisation of open-ended responses

  • For the translation of open-ended responses

  • For the translation of questionnaires and analysis dashboards

  • For more complex statistical procedures in the driver analysis

Whether an open-ended response constitutes personal data depends on the context of the question and the type of analysis. The determination must therefore always be made on a case-by-case basis.
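The sentiment-analysis use case above can be illustrated with a short sketch. The snippet below is a hypothetical, simplified stand-in for a sentiment service such as Azure AI Language: a keyword-based classifier mimics the (response id, sentiment label) shape of a real sentiment result. The function names and word lists are illustrative assumptions, not Survalyzer's actual implementation.

```python
# Illustrative sketch only: a keyword-based stand-in for a real
# sentiment-analysis service such as Azure AI Language.
POSITIVE = {"good", "great", "helpful", "easy"}
NEGATIVE = {"bad", "slow", "confusing", "broken"}

def classify_sentiment(text: str) -> str:
    """Return 'positive', 'negative' or 'neutral' for one open-text answer."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

def analyse_responses(responses: dict) -> dict:
    """Map each survey response id to a sentiment label."""
    return {rid: classify_sentiment(text) for rid, text in responses.items()}

results = analyse_responses({
    "r1": "The survey was easy and the interface is great.",
    "r2": "Loading was slow and the questions were confusing.",
    "r3": "I completed it yesterday.",
})
print(results)  # {'r1': 'positive', 'r2': 'negative', 'r3': 'neutral'}
```

In a production setting the classification step would be a call to the hosted AI service rather than a local word list; the point here is only the data flow from open-text responses to per-response labels.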

Location and type of data processing when using AI services from Survalyzer

Survalyzer uses the AI services of MS Azure. All data is therefore processed in the MS Azure data centres.

  • The AI services of MS Azure also use the algorithms of OpenAI. However, the data is processed exclusively in the MS Azure data centres. If a Survalyzer customer has selected the data location West Europe, data is processed exclusively in the European Union by existing sub-processors.

  • MS Azure has no rights to the data entered and may not use it for its own models.

 

The AI Act

About the AI Act of the European Union

The AI Act is the first-ever legal framework on AI, which addresses the risks of AI and positions Europe to play a leading role globally.

The AI Act aims to provide AI developers and deployers with clear requirements and obligations regarding specific uses of AI. At the same time, the regulation seeks to reduce administrative and financial burdens for business, in particular small and medium-sized enterprises (SMEs).

The AI Act is part of a wider package of policy measures to support the development of trustworthy AI, which also includes the AI Innovation Package and the Coordinated Plan on AI. Together, these measures will guarantee the safety and fundamental rights of people and businesses when it comes to AI. They will also strengthen uptake, investment and innovation in AI across the EU.

As the first comprehensive legal framework on AI worldwide, the new rules aim to foster trustworthy AI in Europe and beyond by ensuring that AI systems respect fundamental rights, safety and ethical principles, and by addressing the risks of very powerful and impactful AI models.

 

Risk-based approach of the AI Act

The Regulatory Framework defines 4 levels of risk for AI systems:

Unacceptable Risks

All AI systems considered a clear threat to the safety, livelihoods and rights of people will be banned, from social scoring by governments to toys using voice assistance that encourages dangerous behaviour.

High Risk

AI systems identified as high-risk include AI technology used in:

  • critical infrastructures (e.g. transport), that could put the life and health of citizens at risk;

  • safety components of products (e.g. AI application in robot-assisted surgery).

Limited risk

Limited risk refers to the risks associated with lack of transparency in AI usage. The AI Act introduces specific transparency obligations to ensure that humans are informed when necessary, fostering trust.

Minimal or no risk

The AI Act allows the free use of minimal-risk AI. This includes applications such as AI-enabled video games or spam filters. The vast majority of AI systems currently used in the EU fall into this category.

Risk evaluation regarding the survey data collected with Survalyzer

An initial risk analysis carried out by Survalyzer shows that data collected via Survalyzer and analysed with AI falls into the category of minimal or no risk.

In individual cases, this general assessment may not apply, and further detailed risk analyses are then necessary.

 

Further resources

https://learn.microsoft.com/en-us/legal/cognitive-services/openai/data-privacy?context=%2Fazure%2Fai-services%2Fopenai%2Fcontext%2Fcontext&tabs=azure-portal

https://learn.microsoft.com/en-us/compliance/assurance/assurance-artificial-intelligence

https://learn.microsoft.com/en-us/azure/security/fundamentals/shared-responsibility-ai

https://learn.microsoft.com/en-us/azure/ai-services/language-service/custom-text-classification/overview

https://learn.microsoft.com/en-us/azure/ai-services/language-service/overview

© Survalyzer AG