Plain-English terms for using the AI features inside Shield.


1. Scope

This policy governs Shield's in-product AI agent (the "Agent"), which is included by default in new trials, as well as any future AI features.

2. Data handling

  • No model training on your content. We do not use your prompts, outputs, or underlying data to fine-tune or train any model.
  • Product improvement & insights. We may inspect aggregated usage patterns or talk with customers to improve the Agent and to generate anonymized or aggregated analytics, benchmarks, or reports. Any such outputs will not identify you, your organisation, or your users. Raw customer content never feeds model training. For more on how we use anonymized/aggregated data generally, see our Privacy & Cookie Notice.
  • EU storage by default. Any customer data Shield stores (outside the model provider environment) sits on EU-based servers, like the rest of our customer and user data.

3. Model providers and the Model Context Protocol (MCP)

  • Prompt routing. When you use the Shield AI Agent, your entire prompt is sent directly to our current model providers — OpenAI and Anthropic, both located in the United States. These transfers rely on the EU–US Data Privacy Framework (DPF); if the framework or its adequacy decision is withdrawn, we will fall back to the 2021 EU Standard Contractual Clauses (SCCs) plus the safeguards in Annex IV.
  • How MCP works. MCP is an interface the model can choose to call when it needs extra facts. Instead of forwarding your whole prompt, the model passes only a minimal, structured request, e.g. {"orgId":"abcd12345", "search":"Andreas"}. MCP then fetches the requested data from Shield's EU-hosted API and returns it to the model. Many prompts are answered without any MCP call, and the additional context retrieved through MCP never leaves the EU (see the sketch after this list).
  • No training, limited retention. Prompts, MCP requests, and outputs are not used to train any model, and logs are kept only as long as needed for abuse monitoring and troubleshooting.
  • Changes. We will update the public sub-processor list and give 30 days' notice before adding any new model provider that will process customer prompts.
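
For readers who want to see the shape of this flow, the sketch below illustrates it in TypeScript. It is a simplified illustration only: the endpoint URL, field names, and handler are hypothetical, not Shield's actual API and not the real MCP wire format. The point is the pattern described above: the model emits a small structured request, a handler resolves it against an EU-hosted API, and only that retrieved context flows back to the model.

    // Illustrative sketch only: names, URL, and fields are assumptions, not Shield's real API.

    // The minimal, structured request the model emits instead of forwarding the prompt.
    interface McpRequest {
      orgId: string;   // e.g. "abcd12345"
      search: string;  // e.g. "Andreas"
    }

    // The extra context returned to the model, fetched from an EU-hosted endpoint.
    interface McpResult {
      matches: Array<{ id: string; name: string }>;
    }

    // Hypothetical handler: receives the structured request and queries an EU-hosted API.
    async function handleMcpRequest(req: McpRequest): Promise<McpResult> {
      const url = new URL("https://eu.api.example-shield.invalid/v1/search"); // placeholder endpoint
      url.searchParams.set("orgId", req.orgId);
      url.searchParams.set("q", req.search);

      const response = await fetch(url, { headers: { Accept: "application/json" } });
      if (!response.ok) {
        throw new Error(`MCP lookup failed: ${response.status}`);
      }
      return (await response.json()) as McpResult;
    }

    // Example: the model asked for extra facts about "Andreas" in org "abcd12345".
    // handleMcpRequest({ orgId: "abcd12345", search: "Andreas" }).then(console.log);

Note that the prompt itself never appears in this exchange; only the two fields the model chose to send are used for the lookup.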

4. Security

Shield inherits encryption, access logs, and other controls from its cloud and model vendors (each SOC 2 / ISO 27001 certified). Shield doesn't yet hold its own certification.

5. Acceptable use

You must not:

  1. Break any law or regulation.
  2. Generate hateful, harassing, or discriminatory content.
  3. Provide or solicit medical, legal, or regulated financial advice.
  4. Facilitate weapons development, surveillance, or other unethical activity.

Shield may suspend or terminate access for violations.

6. Fair-use & rate limits

To preserve quality for all customers, Shield may throttle or limit usage that is unusually high-volume or abusive. Repeated circumvention attempts may result in suspension.

7. Audit & logging

Shield and its model providers record prompt metadata (timestamps, token counts, error codes) for security, abuse monitoring, and analytics. These logs are retained for a reasonable period and are not used to train models.
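
As an illustration of what "prompt metadata" means in practice, a record of this kind might look like the sketch below. The field names are hypothetical, not Shield's actual schema; the key point is what is absent, namely the prompt and output text themselves.

    // Hypothetical log record, for illustration only; field names are assumptions.
    // It captures metadata (when, how much, whether the call failed) and never
    // the prompt or output text itself.
    interface PromptLogEntry {
      timestamp: string;      // ISO 8601 time of the request
      promptTokens: number;   // token count of the prompt
      outputTokens: number;   // token count of the response
      errorCode?: string;     // present only when the call failed
    }

    const example: PromptLogEntry = {
      timestamp: "2024-05-01T09:30:00Z",
      promptTokens: 412,
      outputTokens: 128,
    };

    console.log(example);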

8. Output caution & liability

AI output can be wrong, incomplete, or biased. You are responsible for verifying results before acting on them. The Agent is provided "as-is," with no warranties; Shield's total liability is limited by your primary SaaS agreement.

9. Impact assessments

Shield performs internal AI risk and impact assessments on new or materially changed AI features and updates controls in line with emerging regulations (e.g., EU AI Act).

10. Compliance regions

We adhere to:

  • GDPR (you are the Controller; Shield is the Processor).
  • CCPA/CPRA for California users.

We will revise this policy if stricter local AI laws arise.

11. Contact

Questions or data-subject requests: legal@shieldapp.ai.

12. Changes

We will post material updates at least 30 days before they take effect. Continued use after that date means you accept the changes.