inca AI FAQ


Humanity is still in the very early days of “artificial intelligence”, but the impacts have already been, and will continue to be, profound. As with any powerful technology, there is a grave need for awareness and responsible usage, particularly considering its ethical consequences. To achieve Nexxt Intelligence’s mission of keeping humans at the center, we need to ensure that artificial intelligence is used in a manner that is transparent to all stakeholders, and that insights/data arising from methodologies involving artificial intelligence are systematically marked and easy to understand. However, our onus goes far beyond that: technologies leveraging artificial intelligence pose considerable risks of misrepresenting, misleading, or even discriminating against people in ways that can be impossible to effectively audit or measure, ultimately perpetuating existing inequalities. To ensure that inca remains a tool for good, our technological approach employs robust safeguards and quality assurance, and we are continuously working to improve our explainable AI elements and human-in-the-loop (HITL) controls.

In the service of informing market researchers through greater transparency, ESOMAR has produced a checklist of 20 questions to help buyers evaluate AI-based services for market research and insights. Here we are proud to present our answers to these questions!

Disclaimer: answers in this FAQ were written by the Nexxt Intelligence team with the assistance of GPT-4o.

Last updated: May 24, 2025.

  • Our company has a robust blend of experience and know-how in providing AI-based solutions for research, underpinned by a strong foundation in both market research and advanced AI technologies. Here’s a detailed overview of our expertise:

    Vision & Foundational Expertise

    Our solution is meticulously crafted by market researchers who specialize in applied R&D in natural language processing (NLP) and large language models (LLMs). Kathy Cheng, the founder of Nexxt Intelligence, brings decades of experience in qualitative market research. She envisioned that qualitative principles could be scaled to deepen and broaden insights from digital research, leveraging AI to achieve this vision. Our team, composed of seasoned market researchers, has been at the forefront of integrating conversational AI into insight platforms.

    Real-World Application and Iteration

    From the inception of our development journey, our AI solution, inca, has been rigorously tested in conversational studies conducted by enterprise and agency clients. Over time, these clients have evolved into key innovation partners. Each iteration of inca has been informed by real use cases and client feedback, ensuring that our solution is both practical and effective in real-world scenarios.

    Academic Contributions

    Our innovation is supported by our engineering and R&D teams, as well as esteemed academic collaborators, including the Waterloo Artificial Intelligence Institute, the University of Toronto, and the Vector Institute for Artificial Intelligence. Our cutting-edge research in LLMs and NLP has led to several peer-reviewed publications that formalize market research problems as machine learning tasks and provide practical guidance to practitioners.

    Industry Recognition

    Our expertise and innovative approach have been recognized by the industry through various accolades and publications:

    • Featured in leading industry publications (list available upon request)

    • Awarded Best Technology Innovation at the MRS Awards in 2023

    • Ranked among the Top 50 Most Innovative Suppliers by GRIT in 2023

    • Received the Innovation and Advancement in Marketing Research Award from the Canadian Research Insights Council in 2024

    In summary, our company combines deep market research expertise with advanced AI capabilities, validated through real-world applications, academic contributions, and industry recognition. This unique blend ensures that we deliver AI-based research solutions that are both innovative and grounded in practical, actionable insights.

  • Overall Impact

    AI-based services have the potential to revolutionize the field of market research by enabling deeper human understanding through the integration of quantitative and qualitative research methodologies. This fusion allows for "light qual" AI-led research at scale, providing researchers with new opportunities to address several key pain points:

    - Data Quality and Depth: AI can enhance the richness and accuracy of data collected, leading to more nuanced insights.

    - Resource Constraints: AI can automate repetitive tasks, freeing up human researchers to focus on more strategic activities.

    - Scalability Constraints: AI enables the handling of large datasets efficiently, making it feasible to conduct extensive research without proportional increases in time and cost.

    - Timeliness: AI accelerates the research process, allowing for quicker turnaround times and more timely insights.

    Through features like SmartProbe and AI Coding, inca not only makes the research process more efficient but also enriches the depth and quality of the data collected, ultimately leading to more actionable insights and evidence-based decision-making.

    Enhanced Data Collection with SmartProbe

    One of the standout features of our AI solution is SmartProbe, which significantly enhances data collection by engaging survey participants with follow-up questions based on their initial responses. This leads to:

    - Increased Engagement: Participants are more likely to stay engaged when they feel their responses are being actively considered.

    - Deeper Insights: Follow-up questions prompt more detailed and thoughtful responses, enriching the data collected.

    - Information Elicitation: SmartProbe effectively draws out more specific and actionable insights from participants.

    For example, in a study on opinions about the three main political party leaders in Canada, the average word count per response increased from 5 to 14 words. Additionally, twice as many participants identified significant differences between the leaders, and three times as many identified specific strengths driving voting intentions.

    Faster, Cheaper, and Systematized Coding with AI

    Our AI Coding feature offers substantial benefits in terms of speed, cost, and consistency:

    - Efficiency: AI can code verbatims faster and more cost-effectively than traditional methods.

    - Systematic Application: Using our "saved codeframes" feature, researchers can build robust codeframes that can be systematically applied across multiple datasets, ensuring consistency and reliability.

    - Learning and Adaptation: In an upcoming release, AI Coding will learn from human researchers, allowing organizations to effectively teach specialized models that distill organizational knowledge and processes. This feature will enable the creation of highly customized and accurate coding frameworks tailored to specific research needs.

  • SmartProbe

    • Asking the Right Questions: one of the key challenges we faced was training SmartProbe to ask good open-ended questions, which is anything but trivial. Through extensive real-world testing on diverse projects and contexts, combined with our in-house market research expertise, we have effectively trained SmartProbe to emulate professional interviewers. Additionally, our real-time quality assurance system acts as a second pair of eyes, reviewing candidate questions generated by SmartProbe to ensure high-quality and relevant queries.

    • Putting Humans in the Center: teaching an AI to be an effective interviewer can be challenging. To address this, we introduced our “trained model” feature for SmartProbe, allowing researchers to easily steer its behavior based on their own examples. This ensures that the AI aligns closely with human expertise and expectations.

    • Multi-Turn Probing: while asking multiple rounds of questions can sometimes annoy participants, we have identified strong use cases for complex AI-led conversations through collaboration with our innovation partners. This approach ensures that multi-turn probing is used judiciously to enhance data quality without compromising participant experience.

    • Increasing Engagement: engagement is crucial for high-quality data collection, and it’s becoming increasingly difficult to maintain. Fortunately, engagement is core to inca’s mission. Our suite of tools, including projective techniques, SmartProbe dynamic engagement, and upcoming Voice+ features, makes it easier for participants to share their thoughts and feelings.

    • Quality Control: ensuring high-quality and relevant information from survey participants is challenging. SmartProbe’s demerit AI feature helps flag low-effort responses, and our new multi-turn API (available in July 2024) will bring researchers into the loop with flexible quality controls. This helps maintain data integrity and relevance.

    • Regulated Industries: certain regulated industries have substantial concerns with generative AI solutions. To address this, we introduced template-based probing and custom quality assurance criteria for SmartProbe, ensuring that the AI adheres to industry-specific requirements and constraints.

    AI Coding

    • Analyzing Conversations: conversational data is often informal, high-context, and nuanced. We have explored various approaches and refined our technology to leverage LLMs and clustering techniques for efficient, high-confidence coding of conversational data.

    • Limitations of Fixed Ontologies: fixed ontologies are often insufficient for market research, rarely providing clear paths to actionable insights. We advocate for ad-hoc, context-specific, and adjustable codeframes. Our AI Coding surpasses the limitations of fixed ontologies, and our saved codeframes feature allows for robust coding tailored to specific needs.

    • Flawed Codeframes: inconsistent and non-exhaustive codeframes can lead to poor coding accuracy and numerous uncoded verbatims. Our AI Coding codeframe generation feature follows principles to ensure systematically consistent codeframes. Human-in-the-loop controls allow researchers to interact with AI Coding during codeframe generation, creating comprehensive and cohesive frameworks.

    • Analyzing Verbatims in Native Languages: currently, AI Coding uses machine translation to translate non-English verbatims to English before analysis. While this has worked well, we have seen promising results from using multilingual models to analyze verbatims in their original language. Native language AI Coding will be available in a future release.
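    The hybrid LLM-plus-clustering approach described under “Analyzing Conversations” can be illustrated with a minimal sketch. Everything below is hypothetical: toy 2-d vectors stand in for LLM sentence embeddings, and a simple greedy cosine-similarity grouping stands in for inca’s actual (proprietary) clustering pipeline.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def greedy_cluster(embeddings, threshold=0.95):
    """Assign each verbatim to the first cluster whose seed vector is
    similar enough; otherwise start a new cluster."""
    clusters = []  # each cluster: {"seed": vector, "members": [index, ...]}
    for idx, vec in enumerate(embeddings):
        for cluster in clusters:
            if cosine(vec, cluster["seed"]) >= threshold:
                cluster["members"].append(idx)
                break
        else:
            clusters.append({"seed": vec, "members": [idx]})
    return clusters

# Toy vectors standing in for real LLM embeddings of each verbatim.
verbatims = ["too pricey", "costs too much", "love the taste"]
embeddings = [[0.9, 0.1], [0.85, 0.15], [0.1, 0.95]]

clusters = greedy_cluster(embeddings)
for cluster in clusters:
    print([verbatims[i] for i in cluster["members"]])
# → ['too pricey', 'costs too much']
# → ['love the taste']
```

    In practice the embeddings would come from a language model and the threshold would be tuned per study; the sketch only shows why semantically similar verbatims end up grouped under one candidate code.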

    What Has Worked Well

    • SmartProbe's Real-Time QA System: Ensures high-quality questions, increasing engagement and data richness.

    • Human-in-the-Loop Features: Allow researchers to guide and refine AI behavior, ensuring alignment with human expertise.

    • Increased Engagement Techniques: Projective techniques and dynamic engagement tools have successfully maintained participant interest and data quality.

    • Efficient AI Coding: Our hybrid approach using LLMs and clustering techniques has proven effective for coding conversational data.

    Keeping Humans in the Center

    Our mission to keep humans at the center has kept us grounded while building inca, and has manifested in the following lessons:

    • The Researcher is (Almost) Always Right: inca is designed to serve market researchers, making it easy for them to steer or revise the AI’s behavior. Features like “trained models” for SmartProbe and revision capabilities for AI Coding ensure that researchers can guide the AI effectively.

    • Language is Inherently Subjective: Language models mimic human language, which is subjective. This means that even factual statements can be interpreted subjectively, and AI models cannot be entirely unbiased.

    • Human-in-the-Loop is Essential: Models are never perfectly aligned with humans, so integrating human oversight is crucial for maintaining alignment and quality.

    • Caution with Synthetic Data: Language models are not necessarily representative or pluralistic, which poses risks when using synthetic data. We apply such data judiciously to avoid biases and inaccuracies.

    Commercialization Challenges

  • Nexxt Intelligence uses AI to make market research more efficient, engaging, and insightful. In simple terms, our AI features help researchers ask better questions, get more detailed answers, and analyze data faster and more accurately. This makes the entire research process more efficient and effective, leading to better insights and more informed decision-making. By leveraging AI, we aim to make market research not only easier for researchers but also more engaging and valuable for participants.

    inca currently has three main AI features:

    SmartProbe

    What It Does:

    • Engages Participants: SmartProbe interacts with survey participants by asking follow-up questions based on their initial responses. This helps to gather more detailed and meaningful answers.

    • Real-Time Analysis: As participants respond, SmartProbe uses AI to analyze their answers in real-time, ensuring that the questions asked are relevant and insightful.

    • Quality Control: SmartProbe includes tools like "target detection" to identify key topics and "demerit classification" to flag low-quality responses, ensuring that the data collected is of high quality.

    Key Benefits:

    • Deeper Insights: By asking follow-up questions, SmartProbe helps to uncover more detailed and nuanced information.

    • Increased Engagement: Participants feel more engaged as the AI tailors questions to their responses, making the survey experience more interactive.

    • Efficient Data Collection: SmartProbe automates the process of probing for more information, saving researchers time and effort.

    AI Coding

    What It Does:

    • Automates Data Analysis: AI Coding helps researchers analyze large amounts of textual data quickly and accurately. It can automatically categorize responses into different themes or topics.

    • Creates Codeframes: Researchers can create "codeframes" (categories for organizing data) and use AI Coding to apply these consistently across all responses.

    • Additional Features: AI Coding also provides summaries and sentiment analysis, and we are continuing to add new features regularly.

    Key Benefits:

    • Faster Analysis: AI Coding speeds up the process of analyzing textual data, allowing researchers to get insights more quickly.

    • Cost-Effective: Automating data analysis reduces the need for manual coding, saving time and resources.

    • Consistent Results: AI Coding ensures that data is categorized consistently, improving the reliability of the analysis.

    Voice+

    We leverage AI to better engage with and understand survey participants, with the following features:

    • Capturing Voice / Video Input: transcripts are generated in real-time and can be probed on and analyzed later.

    • Multimodal AI: probe on uploaded images or videos in real-time, and get useful insights from them during analysis.

    • Multimedia Insights: include voice & video recordings in your inca analytics.

  • Our primary expertise lies in conversational AI, specifically in textual processing and conversational dynamics. All our conversational AI capabilities are developed using a hybrid approach that combines large language models (LLMs) with other in-house natural language processing (NLP) techniques. This purpose-built design ensures that our AI is:

    • Fine-Tuned for Market Research: Our AI is specifically tailored to handle market research conversations, making it highly effective in this domain.

    • Robust and Secure: We have built-in guardrails to mitigate potential risks, ensuring that our AI operates safely and reliably.

    AI Subprocessors

    We rely on the following subprocessors to provide our AI features:

    • Amazon Web Services (AWS)

    • Google Cloud Platform (GCP)

    • Microsoft Azure

    • OpenAI

    • Deepgram Inc.

    • Zoho

    Nexxt Intelligence does NOT allow any of its AI subprocessors to retain customer data for more than 30 days, nor to use it for anything other than abuse monitoring.

    Please note that Nexxt Intelligence also uses other LLM providers for R&D.

  • SmartProbe and AI Coding leverage OpenAI’s GPT family of models, including GPT-3.5, GPT-4, and GPT-4o. Details about the datasets used to pretrain these models have not been made public.

    Nexxt Intelligence adapts and incorporates these models by further leveraging our market research knowledge base which consists of data that has been collected (or shared) by our customers and further annotated by our in-house market research experts. Data in the market research knowledge base has been stripped of any personally identifying information, and corresponds to a large variety of different research contexts, domains, industries, languages, demographics, methodologies, etc.

    Nexxt Intelligence encourages clients to share their data for the purpose of “Learned Elements”; however, clients are free to opt out, in which case Nexxt Intelligence will never use their customer data for training (nor for model evaluation), and the data retention period can be limited to as little as 30 days.

    Nexxt Intelligence has agreements in place with all data subprocessors that they may NOT access the data for anything aside from abuse monitoring.

    Nexxt Intelligence leverages synthetic data (generated from LLMs) to simulate SmartProbe conversations. This data is used exclusively for the purpose of human-in-the-loop review and feedback, ensuring that market research experts have the opportunity to see SmartProbe in action and adjust its behaviour before using it with real survey participants. Other usages of data augmentation include machine translation of datasets into other languages, which is used as part of our AI Coding continuous improvement pipeline.

  • Ensuring the accuracy, validity, and reliability of AI-generated outputs is crucial for delivering high-quality market research insights. Here’s an overview of our processes to verify and validate the output, handle biases, and ensure that results are fit for purpose.

    SmartProbe

    SmartProbe’s performance is evaluated based on four key metrics, building on the sensibleness and specificity average (SSA) introduced in "Towards a Human-like Open-Domain Chatbot" (Adiwardana et al., 2020):

    1. Sensibleness: Does the question make logical and contextual sense within the current conversation?

    2. Specificity: How specific is the question in eliciting information pertinent to the market research objectives?

    3. Alignment: Does it follow the user’s instructions and contextual information? This relates to SmartProbe features such as training examples, conversational targets, or question objectives.

    4. Professionalism: Does it adhere to market research principles and ethics? For example, this includes not asking questions that are close-ended, leading, toxic/incendiary, or which elicit personal information.

    In addition, SmartProbe provides the following features to ensure quality assurance:

    • Real-Time Quality Assurance: this system reviews candidate questions generated by SmartProbe, ensuring they meet the above criteria before being presented to participants.

    • Human-in-the-Loop: Researchers can steer SmartProbe’s behaviour through its various features, allowing for human oversight and adjustments.
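    The four metrics above feed SmartProbe’s real-time quality assurance gate. The following sketch is purely illustrative: the criteria names come from this FAQ, but the scoring interface, thresholds, and candidate questions are invented for the example (in practice, scores would come from trained models).

```python
# Hypothetical sketch of a real-time QA gate for candidate probe
# questions. Scores are supplied directly here for illustration.

CRITERIA = ("sensibleness", "specificity", "alignment", "professionalism")

def passes_qa(scores, threshold=0.7):
    """A candidate question is eligible only if it clears the
    threshold on every criterion."""
    return all(scores.get(c, 0.0) >= threshold for c in CRITERIA)

def select_probe(candidates, threshold=0.7):
    """Return the first candidate that passes QA, or None to fall
    back to a safe default (e.g. a template question)."""
    for question, scores in candidates:
        if passes_qa(scores, threshold):
            return question
    return None

candidates = [
    ("Don't you agree the product is great?",   # leading question
     {"sensibleness": 0.9, "specificity": 0.8,
      "alignment": 0.9, "professionalism": 0.2}),
    ("What specifically did you like about the taste?",
     {"sensibleness": 0.9, "specificity": 0.9,
      "alignment": 0.9, "professionalism": 0.9}),
]

print(select_probe(candidates))
# → What specifically did you like about the taste?
```

    The design point the sketch captures is that a low score on any single criterion (here, professionalism, because the first candidate is leading) vetoes the question before a participant ever sees it.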

    AI Coding

    Codeframe Generation: currently, our in-house market researchers manually evaluate generated codeframes to ensure they are comprehensive and relevant. We are developing automated metrics to evaluate the quality of generated codeframes systematically.

    Verbatim Coding: For multi-label classification, we employ a custom F-score to quantitatively assess the model’s accuracy in predicting the correct codes for our evaluation dataset, which is a subset of our market research knowledge base.
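    The custom F-score itself is not public; as an illustration, a standard micro-averaged F1 for multi-label verbatim coding can be sketched as follows (the gold and predicted codes below are toy data, not drawn from our knowledge base):

```python
def micro_f1(true_codes, pred_codes):
    """Micro-averaged F1 over a set of verbatims, where each verbatim
    carries a set of code labels (multi-label classification)."""
    tp = fp = fn = 0
    for truth, pred in zip(true_codes, pred_codes):
        tp += len(truth & pred)   # codes correctly predicted
        fp += len(pred - truth)   # codes predicted but not annotated
        fn += len(truth - pred)   # annotated codes the model missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Human-annotated ("gold") codes vs. model predictions, per verbatim.
gold = [{"price", "taste"}, {"packaging"}, {"taste"}]
pred = [{"price"},          {"packaging"}, {"taste", "price"}]

print(round(micro_f1(gold, pred), 3))
# → 0.75
```

    Micro-averaging pools true/false positives across all verbatims before computing precision and recall, so frequent codes weigh more heavily than rare ones; a macro-averaged variant would instead average per-code scores.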

    Training datasets are carefully selected and screened to ensure they are up-to-date and representative of the environment in which the system will be deployed. This helps in minimizing biases and ensuring the relevance of the AI’s outputs. We use a diverse range of data sources to train our models, ensuring that they are exposed to a wide variety of contexts and scenarios.

  • Understanding the risks and limitations of our AI models is crucial for ensuring the reliability and validity of the results they produce.

    SmartProbe

    The primary risks involved with SmartProbe relate to asking inappropriate questions (including questions that are leading, elicit sensitive information, or repeat/confirm toxic statements), or to classifying the survey participant in an unfair or discriminatory manner.

    SmartProbe is designed with guardrails to prevent the generation of leading questions. An online quality assurance system filters out inappropriate probes, while an offline system checks for consistency and prevents model drift. SmartProbe acknowledges but does not agree with controversial or toxic statements, steering conversations towards neutral topics. SmartProbe also provides human-in-the-loop controls to further steer and constrain the type of probe questions which are asked.

    SmartProbe’s demerit confidence and target detection features may be used as metrics of data quality, and in turn may be used to assess and possibly disqualify a survey participant from a survey. It is important that such decisions are not made on a discriminatory basis, and Nexxt Intelligence advises users to incorporate additional checks and balances so that demerit confidence is never the sole factor used to assess participants. In addition, SmartProbe offers features such as the incorporation of research context, which ensures that confidences are meaningful in the current context; and the demerit confidence model is subject to continuous re-calibration to ensure that it provides fair and equitable scores to participants regardless of their demographics.
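    As a hypothetical illustration of the “checks and balances” advice above, a participant might be flagged for human review (rather than auto-disqualified) only when several independent signals agree, so that the AI’s demerit confidence is never the sole factor. All signal names and thresholds below are invented for the sketch:

```python
# Hypothetical sketch: corroborate the AI demerit signal with
# independent, non-AI data-quality checks before flagging anyone.

def flag_for_review(demerit_confidence, seconds_per_question,
                    distinct_answers_ratio):
    signals = 0
    if demerit_confidence > 0.8:        # AI flags a low-effort response
        signals += 1
    if seconds_per_question < 2.0:      # speeding check
        signals += 1
    if distinct_answers_ratio < 0.3:    # straightlining check
        signals += 1
    return signals >= 2                 # require corroboration

# A high demerit confidence alone is not enough to flag a participant.
print(flag_for_review(0.9, 10.0, 0.8))   # → False
print(flag_for_review(0.9, 1.5, 0.8))    # → True
```

    The flagged participant would then go to a human reviewer; the sketch only shows the corroboration logic, not any real inca API.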

    AI Coding

    The primary risks involved with AI Coding relate to perpetuating biases during analysis, by overlooking minority opinions or miscoding verbatims according to prejudiced assumptions. There is also a risk of generating toxic codes when analyzing datasets that contain toxic text.

    Both of these risks are mitigated by the fact that AI Coding is offered as a tool that makes coding more efficient but still requires human review and revision. Nexxt Intelligence recommends that AI Coding outputs never be used to inform decision-making without human review, so as to ensure appropriateness and accuracy.

  • Aside from the ethical considerations which have been baked into our applications for inca users (for that, see the question regarding ethical principles), there are three very serious ethical issues which are crucial to acknowledge and address: (1) concerns related to intellectual property infringement and nonconsensual usage of publicly-available data for the training of commercial AI models; (2) the risk that wealth created through artificial intelligence perpetuates existing inequalities; and (3) the environmental impact of machine learning and related computational resources.

    Considering these issues, it is humbling to think that a market research technology company might have any significant impact. Nonetheless, we recognize our responsibility to strive toward more equitable futures, and believe that we have an onus to work towards mitigating these consequential ethical issues. Although there is no silver bullet, our commitment to supporting open-source LLM research and our goal to work towards serving in-house small language models (perhaps distilled from larger models) with curated training datasets will help to address these deep ethical issues and are paramount to ensuring that inca remains a tool for good.

  • inca shows a message to inform survey participants that they are interacting with an AI-led survey. For researchers, inca is clearly branded as an AI tool, in particular the SmartProbe and AI Coding features.

  • Yes, we have explicitly defined ethical principles for our AI-driven solutions, and these principles play a crucial role in determining the AI's behavior. Here’s how we ensure that human-defined ethical principles are the governing force behind our AI-driven solutions:

    SmartProbe

    SmartProbe is designed to adhere to ethical principles that align with market research industry standards. While the specific principles are considered proprietary, they are based on our best understanding of what constitutes an effective and ethical market research interviewer. These principles include:

    • Non-leading Questions: Ensuring that questions are open-ended and do not lead participants towards a specific answer.

    • Avoiding Sensitive Topics: Ensuring that the AI does not probe into areas that are sensitive or inappropriate.

    • Neutral Acknowledgment: Acknowledging but not agreeing with controversial or toxic statements, steering conversations towards neutral topics.

    • Respect for Privacy: Avoiding questions that elicit personal information unless explicitly required and consented to.

    AI Coding

    For AI Coding, the ethical considerations are slightly different since the outputs are close-ended and intended for human review. We do not advise any decision-making that relies solely on AI Coding without an informed human-in-the-loop. This ensures that ethical principles are upheld through human oversight.

    Practical Implementation of Ethical Principles

    Production Models:

    • Constitutional AI Techniques: We have explored various techniques adapted from constitutional AI to instil ethical principles into SmartProbe. This ensures that the AI behaves in a manner consistent with our ethical guidelines.

    • Real-Time and Offline Quality Assurance: We employ both real-time and offline quality assurance systems to ensure the model produces high-quality and ethically aligned outputs. These systems continuously monitor and review the AI’s performance.

    • Human-in-the-Loop (HITL) Controls: Our solutions incorporate robust HITL controls, allowing market research experts to steer and refine the AI’s behavior. This ensures that human judgment and ethical considerations are always part of the decision-making process.

    R&D Process:

    • Human Validation and Checking: Human researchers validate and check the outputs produced by the AI, ensuring that they meet ethical standards and are appropriate for decision-making.

    • Continuous Improvement: Feedback from human researchers is used to continuously improve the AI models, reinforcing ethical behavior and reducing biases over time.

    AI Strategy:

    • Transparency: We maintain transparency about the ethical principles guiding our AI solutions and the measures we take to uphold them.

    • Regular Reviews: Our AI models undergo regular reviews to ensure they remain aligned with ethical principles and industry standards.

    • Stakeholder Involvement: We involve stakeholders in the process of defining and refining ethical principles, ensuring that our solutions meet the expectations and requirements of all parties involved.

  • Bias in Nexxt Intelligence's AI systems, particularly the inca platform, is addressed through several comprehensive strategies:

    1. Human-in-the-Loop (HITL) Controls: The system incorporates robust human-in-the-loop controls, allowing human researchers to oversee and guide the AI's behavior. This includes the ability to modify, reject, or refine AI-generated content and decisions, ensuring that human judgment plays a critical role in the final outputs.

    2. Training and Fine-Tuning: SmartProbe and other AI functionalities are trained and fine-tuned by a team of market research and natural language processing experts. This training includes specific instructions to avoid generating leading questions, introducing bias through question framing, or making value judgments.

    3. Built-in Guardrails: The AI systems are equipped with built-in guardrails designed to prevent the generation of inappropriate or biased content. These include mechanisms to avoid mentioning brand names inappropriately, misrepresenting user statements, or steering conversations in a biased manner.

    4. Quality Assurance Systems: Nexxt Intelligence employs both online and offline quality assurance systems to continuously monitor and assess the AI's outputs. These systems help filter out potentially inappropriate or biased content and ensure the AI's features continue to function as expected, accounting for model drift and other changes.

    5. Customization and Contextualization: The platform allows for significant customization and contextualization of AI functionalities according to specific project needs. This includes setting specific targets, training models with particular datasets, and incorporating research context to ensure the AI's outputs are aligned with the researcher's expectations.

    6. Ethical Guidelines and Recommendations: Nexxt Intelligence provides guidelines and recommendations for using AI outputs, such as demerit classification and target detection. These guidelines help researchers understand the limitations and potential biases of AI outputs and recommend strategies to mitigate these biases.

    7. Regular Audits and Bias Testing: The system includes testing mechanisms to ensure parity between analogous inputs in different languages and among different demographic groups. This helps to reduce the risk of perpetuating existing biases and ensures equal treatment of all survey participants.

    8. Transparent Reporting of AI Decisions: For critical decisions influenced by AI, the system is designed to provide explanations or rationales for the AI’s outputs. This transparency helps users understand the basis of AI-generated insights and ensures that these outputs can be critically assessed and validated. This is facilitated through information exposed directly through the user interface / data export, information which is available upon request, and technical papers we have published regarding how inca's AI models operate, the data they are trained on, and the limitations of these models.

    By integrating these strategies, Nexxt Intelligence aims to handle AI bias effectively, ensuring that its systems are as fair as possible while still providing valuable insights for market research. We also recognize that market research experts are best empowered to identify and mitigate bias when AI systems are made transparent, interpretable, and auditable.

  • inca leverages large language models, which are pretrained on vast amounts of textual data; as a result, an LLM's outputs are substantially biased towards the data of humans who are disproportionately represented on the internet. It follows that the data used to pretrain LLMs is insufficient to represent the research objectives that market researchers are concerned with.

    Instead, Nexxt Intelligence has curated our own market research knowledge base and adapted the LLMs which we leverage so as to use that knowledge base in-context in order to generate outputs which are relevant and meaningful to market researchers. Moreover, by assessing the outputs using our human-annotated knowledge base, we can be assured that our models are aligned with market researchers’ expectations even if the underlying datasets used for pretraining are ultimately insufficient.

    When applying inca’s AI capabilities to various demographics, market researchers are provided with human-in-the-loop controls to steer the behaviour of SmartProbe or AI Coding, ensuring that the AI’s outputs are aligned with the examples that the human expert has provided.

  • Our R&D pipeline starts by checking our client database to confirm whether a particular client has consented to their customer data being used for Learned Elements. Thus, all data fed into our R&D pipeline is guaranteed to have been approved for use as training data.

    Survey data is logically segmented in inca’s databases, first at the client level and then at the sub-client level (such as project or AI Coding instance), and is always stored separately from PII. Customer data is never exposed to other customers. As such, we do not make our training nor evaluation datasets publicly available.

  • Nexxt Intelligence complies with GDPR and other global privacy legislation. All clients must sign a data processing agreement (DPA) before using inca.

    Nexxt Intelligence does not allow processing of personally identifying information via the SmartProbe API.

    On the inca platform, the following personally identifying information may be collected:

    • Account details for client personnel: this includes name, email address, job title, and IP address. It is stored in protected servers, and is deleted when the user is removed.

    • Personal information elicited from survey participants: personally identifying information may only be elicited using a special “PII” question type. Before gaining access to the PII question type, clients must revise the DPA, and perform a data protection impact assessment (DPIA) and other required processes before collecting any data. PII collected from survey participants is stored securely, is only accessible to users with elevated permissions, and is retained for a limited period of time.

    For more information, please contact our Data Protection Officer, Josh Seltzer (josh@nexxt.in).

  • Nexxt Intelligence is working towards attaining ISO 27001 certification in 2025. For testing the resilience of our AI models to adversarial attacks specifically, our team follows red-teaming guidance for LLMs from the Giskard team (https://www.giskard.ai).

  • Data ownership, usage permissions, and intellectual property are fully clarified in our Master Service Agreement. The customer retains full ownership of Customer Data, which includes both the inputs and outputs for any inca project (including the questions generated by SmartProbe, as well as analytics outputs such as coded verbatims). The AI models and other software that constitute the inca platform and affiliated services are considered the intellectual property of Nexxt Intelligence, and are licensed to customers on a non-exclusive basis.

  • Nexxt Intelligence reserves the right to make use of anonymized Customer Data for the purpose of Learned Elements, i.e. learnings that we can apply to our systems to improve our service offering. Such data is stored in a secure manner and is not exposed to other customers. Nexxt Intelligence’s subprocessors do NOT have permission to use Customer Data for Learned Elements. Customers who wish to opt out of the usage of their data for Learned Elements may do so when signing the Master Service Agreement.

    All data is processed and stored securely in the United States by default. Data storage in other regions is available, but may incur additional costs depending on the specific region and service.

  • Yes: our Master Service Agreement makes clear that customers own the Customer Data, which includes the output. Please refer to question #18 for more details.