Frequently Asked Questions

How does ADEQUATA differ from legacy Data Quality tools?

Legacy tools rely on manually defined business rules that are time-consuming to set up and expensive to maintain. ADEQUATA takes the view that most business rules are already implicit in the statistical distributions of the data itself, and uses ML-driven remediation to learn the latent logic patterns and distributions within your dataset and fix issues automatically, without requiring you to write a single rule.
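
As a purely illustrative sketch (not ADEQUATA's actual models), the idea of letting a column's own distribution stand in for a hand-written rule can be shown with a simple robust outlier check in Python:

```python
# Illustrative sketch only: infer a "rule" from a column's own distribution
# instead of hand-writing one, using a robust z-score (median / MAD).
import pandas as pd

df = pd.DataFrame({"unit_price": [9.9, 10.1, 10.3, 9.8, 10.0, 1040.0, 10.2]})

col = df["unit_price"]
median = col.median()
mad = (col - median).abs().median()          # median absolute deviation
robust_z = 0.6745 * (col - median) / mad     # ~ standard normal under normality

# Values that violate the implicit distributional "rule" are flagged
# automatically; no human-authored threshold is needed.
df["suspect"] = robust_z.abs() > 3.5
print(df)
```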

Is Axolotl using Generative AI (LLMs) to fix my data?

No. We believe LLMs and agents are best suited for workflow automation, not core data integrity. Axolotl uses analytical, research-grade ML models for high-precision remediation, ensuring your structured data is fixed with mathematical accuracy rather than probabilistic guessing, and meets industrial-grade standards for precision, determinism, and interpretability.

What specific data quality issues does Axolotl resolve?

It provides an end-to-end self-healing capability for inconsistencies, inaccuracies, human errors, redundancies, and incompleteness via an orchestrated data remediation pipeline that handles multiple facets of the most common data quality issues.

What is "Synthetic Data Imputation" and is it safe?

Unlike standard enrichment, which pulls in potentially messy and unverified external data, our models use high-fidelity synthetic imputation to recover missing entries from the internal consistency of your own dataset. No external data is introduced, so recovered values remain governed by your own records.
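
As a minimal sketch of the general idea (using scikit-learn for illustration; this is not ADEQUATA's production model), model-based imputation can recover gaps using only the relationships already present in the dataset:

```python
# Minimal sketch of imputation driven only by the dataset's own internal
# structure, with no external data. Assumes scikit-learn is installed;
# illustrative only, not ADEQUATA's production models.
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

df = pd.DataFrame({
    "quantity":   [2, 5, 3, 4, np.nan, 6],
    "unit_price": [10.0, 10.2, 9.9, 10.1, 10.0, np.nan],
    "total":      [20.0, 51.0, 29.7, 40.4, 50.0, 60.6],
})

# Each column with gaps is modeled as a function of the others, so the
# recovered values stay consistent with patterns already in the data.
imputer = IterativeImputer(random_state=0)
completed = pd.DataFrame(imputer.fit_transform(df), columns=df.columns)
print(completed.round(2))
```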

Do you support all data modalities?

No. Axolotl is a specialist engine for high-value structured datasets (tabular data in CSV and Parquet files). We focus on complex schemas with heavy numerical and categorical content where precision is non-negotiable.

How can you guarantee my data is safe?

We operate under a Zero-Persistence Framework. Your data is processed as an ephemeral stream in volatile memory (RAM) and never touches persistent storage in our environment. We provide the "fix" without the footprint.
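
Conceptually, zero-persistence processing looks like the sketch below: the dataset lives only in in-memory buffers for the duration of the job and nothing is written to disk. This is an illustration of the principle, not ADEQUATA's actual pipeline.

```python
# Conceptual sketch of zero-persistence processing: the dataset exists only
# in RAM buffers for the duration of the job and is never written to disk.
# Illustrative only; not ADEQUATA's actual pipeline.
import io
import pandas as pd

inbound = io.BytesIO(b"id,amount\n1,10.0\n2,\n3,30.0\n")   # stands in for the inbound stream

df = pd.read_csv(inbound)                                   # parsed directly from memory
df["amount"] = df["amount"].fillna(df["amount"].median())   # placeholder "fix"

outbound = io.StringIO()
df.to_csv(outbound, index=False)                            # result also stays in RAM
remediated = outbound.getvalue()                            # streamed back to the caller

del df, inbound, outbound                                   # nothing persists after the session
```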

Does ADEQUATA use my data to train global AI models?

No, strictly not. We warrant that customer data is never used to train, fine-tune, or improve our AI agents or any third-party global models.

What is the "Absolute Purge" Protocol?

Once a remediation job is written back to your environment, the active session and its memory buffers are instantly purged. We retain only minimal, anonymized telemetry (row counts, logic triggers) for compliance and billing.

Can I review the fixes before they go live?

Yes. However far automation goes, we believe human control is non-negotiable. Our architecture enforces a Human-in-the-Loop (HITL) Quality Gate: you receive a "Proposed" version and a comprehensive audit report, and no data is promoted to your environment without your explicit "Accept" certification. If you reject a fix, the system simply keeps your original, last-known-good copy and marks the run as complete.
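
The sketch below shows what a propose → review → accept/reject gate might look like from client code. All names (RemediationJob, propose, accept, reject) are hypothetical and chosen for illustration; they are not ADEQUATA's actual API.

```python
# Hypothetical sketch of a Human-in-the-Loop quality gate. Class and method
# names are illustrative only, not ADEQUATA's actual API.
from dataclasses import dataclass, field

@dataclass
class RemediationJob:
    original: list                                  # last-known-good records
    proposed: list = field(default_factory=list)
    status: str = "pending"

    def propose(self, fixed_records, audit_report):
        """Stage a proposed version plus its audit report for human review."""
        self.proposed = fixed_records
        self.audit_report = audit_report
        self.status = "proposed"

    def accept(self):
        """Explicit human approval: only now is the fix promoted."""
        self.status = "accepted"
        return self.proposed

    def reject(self):
        """Keep the original copy and mark the run as complete."""
        self.status = "completed_rejected"
        return self.original

job = RemediationJob(original=[{"id": 1, "amount": None}])
job.propose([{"id": 1, "amount": 10.0}], audit_report={"rows_changed": 1})
final = job.accept()   # or job.reject() to keep the original
print(job.status, final)
```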

Do you provide security documentation for procurement?

Yes. We can provide our Security Whitepaper and a standard Data Processing Agreement (DPA) upon request to simplify your compliance and legal review.

How does ADEQUATA integrate with my current stack?

We offer effortless integration with leading cloud platforms (Snowflake, Databricks, etc.) and connectors for Google Drive and OneDrive/SharePoint. Axolotl can run as a standalone platform or connect to your existing workflow with zero added complexity.

What is "Stateless Architecture"?

Most platforms try to "own" your data to create vendor lock-in. Our stateless approach keeps governance and lineage fully under your control: the content of your datasets never shows up in our backend storage systems. We spin up dedicated, session-specific ML models that dissolve the moment the task is complete, and you retain full control over which datasets and which data paths are processed and written back.

Does this require a complex on-premise installation?

No. Our engine uses a powerful vertical-scaling approach inside a secure VPC environment. By keeping processing on fewer, larger nodes rather than distributing it across a cluster, it avoids cross-node data movement, making it faster and more secure than traditional horizontal scaling while remaining ready to integrate into data-critical pipelines.

How is the service priced?

We use Data-Qualifying Units (DQUs). One DQU represents 1 GB of logical data successfully remediated, adjusted by a task complexity factor.
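
A short worked example of how the metering composes is sketched below. The complexity factor and per-DQU rate used here are assumed values for illustration, not published prices.

```python
# Hypothetical worked example of DQU-based billing. The complexity factor and
# per-DQU rate below are illustrative assumptions, not published prices.
def dqus_consumed(logical_gb_remediated: float, complexity_factor: float = 1.0) -> float:
    """1 DQU = 1 GB of logical data successfully remediated, scaled by complexity."""
    return logical_gb_remediated * complexity_factor

job_gb = 12.5        # logical volume actually remediated, not file size on disk
factor = 1.4         # e.g. deep synthetic imputation (assumed value)
rate_per_dqu = 3.00  # assumed unit price in your contract currency

dqus = dqus_consumed(job_gb, factor)   # 17.5 DQUs
print(f"{dqus:.1f} DQUs -> {dqus * rate_per_dqu:.2f} billed")
```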

Will I be charged more for large, uncompressed files?

No. Pricing is driven by logical remediation volume (the count of records and attributes), not storage format. Whether your data is a raw CSV or a compressed Parquet file, you only pay for the actual volume of data points remediated.

Are there hidden fees for "Overage" or API calls?

Transparency is our priority. Your tier pricing and any DQU overage consumption are all you pay. Every job provides a detailed Remediation Summary so you can audit exactly what you are paying for.

What does the "Complexity Factor" mean?

Some datasets require more intensive ML processing (e.g., complex multi-stage feature engineering or precision tuning for deep synthetic imputation). The complexity factor ensures you pay a fair price based on the actual compute effort required to heal your specific dataset.