Frequently asked questions
Answers about scope, pricing, delivery model, and collaboration.
Do you work fixed-price or hourly?
Both. Fixed-price works best when scope is clear; otherwise we start hourly with a spending cap.
Can you start with a small paid test?
Yes. A 1–2 week paid pilot is often the fastest way to validate fit and reduce risk.
Can you work with existing teams?
Yes. We can integrate with your repositories, CI, and existing delivery process.
Do you sign NDAs?
Yes. For sensitive work, we sign an NDA before starting discovery.
Do you write tests?
Yes—pragmatically. We focus tests where they protect critical behavior and prevent regressions.
Do you offer ongoing support?
Optional. We can provide a lightweight maintenance agreement or ad-hoc support.
Can you take over a partially finished project?
Yes. We start with a short audit to understand risks, then propose a realistic recovery plan.
Do you help with deployment and hosting?
Yes. We can set up or improve CI/CD and infrastructure pragmatically (cloud or managed platforms).
Do you provide a warranty period after delivery?
Yes—typically a short post-delivery window for fixes, plus optional ongoing support.
How do you handle technical debt in an existing codebase?
We prioritize debt that blocks delivery or causes incidents, then address it incrementally alongside features.
How do you handle data privacy and sensitive information?
We minimize data access by design. Where possible we run processing on-device or in your environment, restrict retention, and apply anonymization/redaction before sending anything to third parties. We can sign an NDA and document data flows.
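As a rough illustration of the redaction step mentioned above, the sketch below masks obvious identifiers before text leaves your environment. The patterns are deliberately simplified examples, not production-grade PII detection, and the function name is hypothetical.

```python
import re

# Simplified redaction pass applied before any text is sent to a
# third-party API. Real projects would use broader pattern sets or
# a dedicated PII-detection library.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s-]{7,}\d")

def redact(text: str) -> str:
    # Replace matches with neutral placeholders so downstream
    # processing still sees sentence structure, but no identifiers.
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text
```

The same idea extends to names, addresses, or internal IDs, depending on what your data flows actually contain.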
Do you use our data to train models?
No. We do not use client data to train our own models. If a solution uses third-party APIs, we configure them to avoid training on your inputs where that option exists, and we can propose alternatives (self-hosted or on-prem) when required.
How do you evaluate AI output quality and prevent failures?
We add guardrails: evaluation sets, automated checks, confidence thresholds, logging, and human-in-the-loop review where needed. For production we design fallback behavior (rules, queues, manual review) so the system degrades safely instead of breaking.
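As a rough illustration of the threshold-plus-fallback pattern described above: below a chosen confidence level, output is routed to a human review queue instead of being acted on automatically. All names and values here are hypothetical; `classify` stands in for a real model call.

```python
CONFIDENCE_THRESHOLD = 0.85  # tuned per task against an evaluation set

def classify(text: str) -> tuple[str, float]:
    # Placeholder for a model call returning (label, confidence).
    return ("invoice", 0.62)

review_queue: list[str] = []

def enqueue_for_review(text: str) -> str:
    # Human-in-the-loop fallback: park low-confidence items for review.
    review_queue.append(text)
    return "pending_review"

def route(text: str) -> str:
    label, confidence = classify(text)
    if confidence >= CONFIDENCE_THRESHOLD:
        return label  # automated path
    return enqueue_for_review(text)  # degrade safely instead of guessing
```

The point is that a low-confidence answer becomes a queue item rather than a silent wrong action, which is what "degrades safely" means in practice.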