AI Policy

Last updated: March 2026

1. Our Commitment

Torinit Technologies Inc. ("Torinit") is an AI consulting firm that builds and deploys intelligence systems within wholesale distribution environments. Our work involves accessing, analyzing, and building models from sensitive business data — transaction records, pricing data, customer information, inventory positions, and operational metrics.

We recognize that the trust our clients place in us is inseparable from how we handle their data, how we build and deploy AI systems, and how we ensure that the intelligence we create serves the client's interests — not ours.

This AI Policy sets out the principles and practices that govern how Torinit develops, deploys, and manages AI systems across all engagement types: AI Consulting, AI Studio, Co-Innovation, and our product Monaro.ai.

2. Core Principles

2.1 Client Data Sovereignty

Client data belongs to the client. Torinit accesses client data solely to deliver the contracted engagement. We do not use one client's data to benefit another client. We do not aggregate client data across engagements to build proprietary datasets. We do not retain client data beyond the engagement unless the client has explicitly authorized continued retention under defined terms.

2.2 No Unauthorized Model Training

Torinit does not use client data to train general-purpose AI models, foundation models, or any model intended for use beyond the specific client engagement — unless the client has provided explicit written authorization. This applies to all engagement types:

  • AI Consulting: Models built during the engagement are trained on the client's data, for the client's use, and the client owns all resulting IP including the trained models.
  • AI Studio: Torinit may develop proprietary models and products using patterns, methodologies, and anonymized architectural learnings from engagements — but never using identifiable client data, transaction records, pricing data, or customer information.
  • Co-Innovation: Data usage terms are defined in the co-innovation agreement and the operating terms of any co-owned entity.

2.3 Transparency

We are transparent with our clients about:

  • What data we access and why
  • What models and algorithms we build
  • How those models make decisions or generate recommendations
  • What the limitations of those models are
  • How the models are monitored after deployment

We do not deploy opaque systems. Every intelligence system Torinit builds includes documentation of its inputs, logic, limitations, and the conditions under which it should and should not be relied upon.

2.4 Human Authority

Torinit's intelligence systems are designed to inform human decisions, not replace them. Every system we build includes a human review point where the appropriate person — the estimator, the counter person, the branch manager, the purchasing director — reviews, adjusts, and approves the system's output before it becomes a business action.

We do not build fully autonomous systems that execute business decisions without human oversight. The human in the loop is not a formality — it is a design requirement.
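The policy does not prescribe an implementation, but the human review point described above can be sketched in a few lines. This is an illustrative example only; the names (`Recommendation`, `approve`, `execute`) are hypothetical, not Torinit's actual API. The key property is that no recommendation becomes a business action until a named person has approved it:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Recommendation:
    """A system-generated recommendation awaiting human review."""
    item: str
    suggested_price: float
    approved: bool = False
    reviewer: Optional[str] = None
    final_price: Optional[float] = None

def approve(rec: Recommendation, reviewer: str,
            adjusted_price: Optional[float] = None) -> Recommendation:
    """Record the human decision: the reviewer may accept the
    suggested price or substitute an adjusted one."""
    rec.reviewer = reviewer
    rec.final_price = adjusted_price if adjusted_price is not None else rec.suggested_price
    rec.approved = True
    return rec

def execute(rec: Recommendation) -> float:
    """Only an approved recommendation becomes a business action."""
    if not rec.approved:
        raise PermissionError("Recommendation requires human approval before execution.")
    return rec.final_price
```

In this sketch, the approval step is enforced in code rather than left to process discipline, which is one way to make the human in the loop a design requirement rather than a formality.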

2.5 Fairness and Non-Discrimination

We design AI systems that do not discriminate against individuals or groups. In the context of wholesale distribution, this means:

  • Pricing intelligence systems do not recommend pricing based on protected characteristics of individual purchasers.
  • Customer intelligence systems do not make account-level recommendations based on the demographic characteristics of account contacts.
  • All systems are designed to improve the quality and consistency of business decisions — not to exploit information asymmetries against customers, suppliers, or partners.

3. Data Handling Practices

3.1 Data Access

During consulting engagements, Torinit may access data from client systems including ERP systems (Epicor, Infor, SAP, DMSi, Karmak, and others), CRM platforms, warehouse management systems, and other operational tools. Access is:

  • Limited to the data necessary for the contracted engagement scope
  • Granted through client-controlled credentials and access mechanisms
  • Logged and auditable
  • Revocable by the client at any time

3.2 Data Processing Environment

Client data is processed in secure, isolated environments. Torinit maintains:

  • Logical separation between client environments
  • Encryption of client data in transit and at rest
  • Access controls limiting data access to the specific engagement team members authorized by the client
  • Audit logging of all data access and processing activities

3.3 Data Retention and Disposition

Upon completion of a consulting engagement:

  • Client data is returned to the client and/or deleted from Torinit systems within the timeframe specified in the engagement agreement (default: 90 days).
  • Trained models and deliverables are transferred to the client (AI Consulting) or retained by Torinit (AI Studio) per the engagement agreement.
  • Torinit may retain anonymized, aggregated statistical information that cannot be traced to any individual client — but never identifiable client data.
  • Clients may request confirmation of data deletion.

3.4 Monaro.ai Product Data

Monaro.ai processes mechanical construction drawings to generate material takeoffs. Regarding data processed through Monaro:

  • Drawings uploaded to Monaro are processed to generate BOMs and are stored in accordance with the customer's subscription agreement.
  • Drawing data may be used to improve Monaro's AI models unless the customer opts out.
  • Customer pricing data, account information, and business-specific configurations are never used for model training and are never accessible to other Monaro customers.
  • Customers may request deletion of their uploaded drawings and associated data at any time.

4. AI Development Practices

4.1 Model Development

When Torinit builds AI models during consulting engagements, we follow these practices:

  • Models are developed and tested using the client's data within the client's engagement environment.
  • Model performance is validated against the client's specific operational context.
  • Model limitations are documented, including known failure modes, edge cases, and conditions where the model's output should be treated with increased scrutiny.
  • Models are designed to degrade gracefully — when the model encounters inputs outside its training distribution, it signals uncertainty rather than producing confident but incorrect output.
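One simple way to realize the graceful-degradation principle above is to record each feature's training-time distribution and flag predictions whose inputs fall well outside it. This is a minimal sketch, not Torinit's actual method; the function names and the 3-standard-deviation threshold are illustrative assumptions:

```python
import statistics

def fit_envelope(training_rows, k=3.0):
    """Record per-feature (mean, stdev, k) from training data so that
    out-of-distribution inputs can be flagged at prediction time.
    k is an assumed threshold in standard deviations."""
    cols = list(zip(*training_rows))
    return [(statistics.mean(c), statistics.stdev(c), k) for c in cols]

def predict_with_uncertainty(model, row, envelope):
    """Return (prediction, confident). Any feature more than k standard
    deviations from its training mean marks the output as uncertain,
    signaling that a human should scrutinize it rather than trust it."""
    confident = all(abs(x - mu) <= k * sd
                    for x, (mu, sd, k) in zip(row, envelope))
    return model(row), confident
```

A caller that receives `confident=False` would surface the output with an uncertainty warning rather than presenting it as a routine recommendation.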

4.2 Testing and Validation

Before any model or intelligence system is deployed to production:

  • It is tested against historical data to validate accuracy.
  • It is reviewed by the client's subject matter experts.
  • Edge cases and failure modes are documented and shared with the client.
  • A pilot period is conducted during which the system's recommendations are reviewed but not acted upon.
  • Acceptance criteria are defined collaboratively with the client.

4.3 Monitoring and Maintenance

Deployed intelligence systems require ongoing monitoring. Torinit provides:

  • Documentation of what to monitor (input drift, output quality, edge case frequency).
  • Alerting recommendations for conditions that indicate the model may need retraining.
  • Guidance on when and how to update models as the client's business evolves.
  • Support terms defined in the engagement agreement for post-deployment model maintenance.
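Input drift, the first monitoring target listed above, is commonly measured with the Population Stability Index (PSI). The following sketch assumes a training-time sample and a production sample of a single numeric input feature; the 10-bin layout and the conventional "PSI > 0.2" rule of thumb are assumptions for illustration, not a Torinit specification:

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a training-time sample
    (expected) and a production sample (actual) of one input feature.
    A common rule of thumb treats PSI > 0.2 as meaningful drift."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(bins + 1)]
    edges[0] = float("-inf")   # catch production values below the training min
    edges[-1] = float("inf")   # ...and above the training max

    def frac(sample, i):
        n = sum(1 for x in sample if edges[i] <= x < edges[i + 1])
        return max(n / len(sample), 1e-6)  # floor avoids log(0)

    return sum((frac(actual, i) - frac(expected, i))
               * math.log(frac(actual, i) / frac(expected, i))
               for i in range(bins))
```

Run periodically against each model input, a rising PSI is the kind of condition that would trigger the retraining alerts described above.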

5. Responsible AI Governance

5.1 Internal Governance

Torinit maintains the following internal governance practices:

  • Engagement teams include both AI/ML engineers and domain experts with distribution industry experience.
  • All client-facing AI systems undergo internal review before deployment.
  • Data access for each engagement is formally scoped, approved, and logged.
  • Team members receive ongoing training on responsible AI practices, data privacy, and industry-specific regulations.

5.2 Client Governance Support

We support our clients in establishing their own AI governance:

  • We provide documentation that enables the client's leadership to understand what the AI system does, how it works, and what its limitations are — in business terms, not technical jargon.
  • We recommend governance structures for managing AI systems post-engagement.
  • We never deploy a system that the client's team cannot understand, manage, and, if necessary, override or disable.

6. Third-Party AI Tools

Torinit may use third-party AI tools and platforms in the course of developing solutions. When we do:

  • Client data is only processed through third-party tools in accordance with the client's engagement agreement and data security requirements.
  • We evaluate the data handling practices of any third-party AI tool before using it with client data.
  • We do not submit identifiable client data to general-purpose AI services without explicit client authorization.
  • Our use of third-party tools is documented and available for client review.

7. Distributor Intelligence Score (DIS) Assessment

The DIS assessment at score.torinit.com uses algorithmic scoring based on your responses:

  • Your responses are processed to generate dimension scores and an overall Distributor Intelligence Score.
  • Financial impact estimates are generated using industry benchmarks and anonymized engagement data — they are directional estimates, not guarantees.
  • Your assessment data is not used to train AI models.
  • Your individual responses are never shared with other companies.
  • Aggregated, anonymized assessment data may be used in Torinit's research and content — but no individual company's responses are ever identifiable.

8. Updates to This Policy

We may update this AI Policy as our practices evolve, as regulations change, and as the AI landscape develops. We will post the updated policy with a revised "Last Updated" date. Material changes will be communicated through our website and, for active clients, through direct notification.

9. Questions and Concerns

If you have questions about this AI Policy or how Torinit handles data and AI systems, please contact us:

Torinit Technologies Inc.

AI Governance

Email: ai-governance@torinit.com

General inquiries: legal@torinit.com

155 Queens Quay E #200, Toronto, ON M5A 1B4, Canada