AI Usage & Responsibility

Transparency, accountability, and clarity on how we use AI technologies.

Effective: January 2026 | Last updated: January 2026

Our AI Philosophy

StackFoundry Labs follows a human-first, risk-aware, and compliance-driven approach to AI adoption. We do not build or deploy autonomous or high-risk AI systems.

AI as Assistive Technology

We use AI to:

  • Improve engineering productivity
  • Accelerate documentation and analysis
  • Support decision-making with insights
  • Automate repetitive, low-risk workflows

Human-in-the-Loop Guarantee

  • AI does not make final decisions
  • Outputs are reviewed by humans
  • Clients control acceptance/rejection
  • Aligns with EU AI Act & GDPR

Types of AI Use

Where We Use AI

  • Documentation search & summarization
  • Code analysis and explanation
  • Log analysis & incident summarization
  • Release note & report generation
  • Workflow automation for engineering
  • Internal productivity tools

Where We Don't Use AI

  • Automated hiring or employee evaluation
  • Credit, financial, or insurance decisions
  • Medical diagnosis or treatment
  • Legal decision-making or advice
  • Surveillance or biometric identification
  • Any high-risk autonomous decisions

Data Handling & Privacy

AI systems are designed with privacy-by-design principles.

Data Minimization

Only the data necessary for a task is processed. Sensitive data is excluded unless explicitly approved.

No Model Training

Client data is never used to train AI models; it is processed in inference mode only.

Data Retention

Prompts and outputs are not retained beyond operational necessity.

Third-Party Providers

Third-party AI providers are used within a limited scope, under contractual safeguards, and with client approval where required.

Transparency & Disclosure

  • AI role is disclosed to clients and users
  • Scope and limitations clearly communicated
  • No deceptive or misleading AI claims
  • Not represented as a replacement for professional judgment

Compliance Alignment

  • 🇮🇳 India: DPDP Act 2023
  • 🇪🇺 EU: GDPR & AI Act (risk-based framework)
  • 🇬🇧 UK: UK GDPR & principles-based AI regulation
  • 🇺🇸 US: CCPA/CPRA & sector-specific laws

Our AI use cases fall within the minimal-risk or limited-risk categories of the EU AI Act's risk-based framework.

Client Responsibilities

Clients are responsible for:

  • Final review and approval of AI outputs
  • Ensuring outputs are appropriate for their use cases
  • Ensuring lawful authority for any data they share

Limitations & Disclaimer

  • AI may produce incomplete or inaccurate outputs
  • Outputs should not be treated as definitive
  • No guarantees regarding AI accuracy
  • Provided on a best-effort basis

Questions About AI Usage?

For questions regarding this policy or AI usage at StackFoundry Labs, contact us at contact@stackfoundrylabs.com.

This policy may be updated as AI regulations evolve. Changes will be reflected with an updated date.

© 2026 StackFoundry Labs. All rights reserved.