THOR THUNDERSCAN · THESIS Security & Compliance

SOC 2 Compliance for Data-Driven SaaS

The Engineering and Operational Blueprint for Building Continuous SOC 2 Readiness into Your Data Platform

Isaac Shi, Co-Founder & GP of Golden Section VC
March 11, 2025

Executive Summary

SOC 2 has quietly become one of the most important commercial gatekeepers in enterprise software sales. Industry surveys consistently show that 83%+ of enterprise buyers require a SOC 2 report before signing contracts with SaaS vendors managing sensitive data. Yet many data-driven SaaS companies treat SOC 2 as a compliance checkbox — a one-time audit performed to unlock a deal — rather than as the continuous security program it is designed to be.

The ongoing shift in auditor expectations:

The current operative standard is the 2017 Trust Services Criteria (TSC) with Revised Points of Focus (2022), published by the AICPA. While the formal document has not been fully reissued since 2022, auditor practice has evolved substantially: AI and ML governance, continuous automated monitoring evidence, and cloud configuration management are increasingly examined focal points in 2025 engagements. Data-driven SaaS companies face a materially more complex practical compliance landscape than they did two or three years ago.

83%+
of enterprise buyers require SOC 2 from SaaS vendors (GrayGroup / multiple surveys, 2025)
$12K–$100K+
typical SOC 2 Type II audit fee alone (Drata, 2025); total program cost is typically 2× higher
6–12 mo
average time to first SOC 2 Type II report
The White Tower at the Tower of London — built by William the Conqueror, a thousand-year-old fortress of layered defenses

The Tower of London wasn't secured by a single wall — it was built with two curtain walls, a moat, watchtowers, and the White Tower keep at its core. SOC 2 works the same way: no single control protects your data. Trust is earned through layers — access controls, encryption, audit logging, incident response — each one a wall your enterprise customers need to see before they hand you their data.

Why SOC 2 Is Now a Revenue Decision

SOC 2 was originally conceived as an auditing standard for technology service organizations. In practice, it has evolved into the primary trust signal that enterprise buyers use to evaluate B2B SaaS vendors. The commercial dynamics are stark: without a SOC 2 Type II report, most data-driven SaaS companies cannot access enterprise deals above $50K ACV.

The Revenue Impact of Compliance Posture

The cost calculus has shifted fundamentally. A first SOC 2 Type II program costs between $30K and $150K in total (audit fees of $12K–$100K+ plus tooling and engineering time). A single blocked enterprise deal often represents $200K to $2M in annual recurring revenue. The math is not ambiguous: delayed compliance is delayed revenue.

SOC 2 Status | Deal Access | Sales Cycle Impact | Typical ACV Range
No SOC 2 | SMB only | Disqualified from enterprise RFPs | <$25K
SOC 2 Type I | Mid-market | Unblocks early-stage enterprise pilots | $25K–$150K
SOC 2 Type II (12-mo) | Full enterprise | Eliminates security questionnaire delays | $150K+
SOC 2 + AI Governance | Regulated sectors | Enables BFSI, healthcare, government | $500K+

Beyond deal access, SOC 2 shapes customer trust in ways that compound over time. Enterprise buyers share vendor security assessments with their own procurement and legal teams. A clean SOC 2 Type II report accelerates those internal approvals. A report with exceptions — even minor ones — creates disproportionate friction because security teams are trained to treat exceptions as risk signals, regardless of materiality.

The SOC 2 Framework in 2025: What Actually Changed

The Five Trust Services Categories

SOC 2 is organized around five Trust Services Categories (TSCs). Unlike ISO 27001, which prescribes specific controls, SOC 2 is outcome-based: you design controls that fit your environment, and auditors verify they are designed well and operating consistently. This flexibility is powerful but demands disciplined design.

REQUIRED

Security (CC)

Protection against unauthorized access, both logical and physical. Required for all SOC 2 audits. Covers access management, change control, incident response, and monitoring.

COMMON ADD-ON

Availability (A)

System availability per commitments. Covers uptime SLAs, disaster recovery, capacity planning, and incident management affecting availability.

DATA PLATFORMS

Processing Integrity (PI)

System processing is complete, valid, accurate, timely, and authorized. Critical for data pipelines, analytics platforms, and AI systems that transform or route data.

DATA PLATFORMS

Confidentiality (C)

Information designated as confidential is protected. Covers encryption, access controls, classification, and disposal. Especially relevant for B2B platforms handling customer data.

CONSUMER DATA

Privacy (P)

Personal information is collected, used, retained, disclosed, and disposed of in line with the entity's privacy notice and commitments. The fifth category; typically added when a platform handles consumer personal data directly rather than acting purely as a processor.

Three Evolving Auditor Focal Points in 2025

Note: The operative TSC document remains the 2017 TSC (Revised Points of Focus — 2022). The following reflects documented shifts in auditor practice and examiner focus areas for current engagements, not a new formal AICPA release.

Focal Point 1: AI and Machine Learning Governance

Auditors increasingly examine AI systems used in service delivery. Focal areas include: (1) training data provenance and quality controls, (2) model validation and testing before production deployment, (3) human oversight mechanisms for automated decisions affecting customers, and (4) security controls protecting AI models from tampering or data poisoning. For data-driven SaaS companies, this is the most operationally significant shift — it means your RAG pipeline, ML models, and AI-powered features are now commonly in scope for SOC 2 review.

Focal Point 2: Cloud Configuration Management

Cloud infrastructure is now scrutinized for configuration drift, not just access controls. Auditors expect documented baselines, automated drift detection (via AWS Config, Azure Policy, or GCP Security Command Center), and evidence of remediation when drift is detected. Multi-cloud environments require explicit shared-responsibility documentation for each provider.

Focal Point 3: Continuous Monitoring Evidence

Manual quarterly evidence collection is increasingly insufficient. Auditors expect continuous automated monitoring via SIEM, CSPM, and automated control testing. Point-in-time screenshots are no longer considered adequate evidence for many control areas. Automated evidence pipelines have shifted from differentiator to expectation.

AI Governance Controls: The New Compliance Frontier

For data-driven SaaS companies using AI, evolving auditor expectations in 2025 bring a set of governance controls into scope that most teams have not yet systematically addressed. Understanding exactly what auditors examine, and how to design controls that satisfy these expectations, is critical for companies in or approaching SOC 2 audits.

Control Area 1: Training Data Governance

Auditors now ask: where does your training data come from, how is its quality assured, and how do you prevent customer data from appearing in training sets without appropriate authorization?

Control | What Auditors Examine | Implementation Pattern
Data lineage documentation | Can you trace every training record to its source? | Metadata table with source_system, collection_date, consent_basis per record
PII exclusion in training | Evidence that PII is scrubbed before model training | Automated redaction pipeline with hash-based verification; column-level PII tags in schema
Data quality validation | Controls ensuring training data meets quality thresholds | dbt tests or Great Expectations checks run before training jobs, with pass/fail evidence captured
Customer data isolation | No cross-tenant training contamination | Separate training datasets per tenant, or explicit consent tracking in data catalog
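The hash-based verification pattern in the table can be sketched as follows. This is an illustrative sketch: the regex patterns and function names are placeholders, and a real pipeline would use a dedicated PII detection service rather than two regexes.

```python
import hashlib
import re

# Illustrative PII patterns only; production systems need a real detector.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def scrub_record(text: str) -> str:
    """Redact email addresses and US SSNs from a free-text training field."""
    text = EMAIL_RE.sub("[REDACTED_EMAIL]", text)
    return SSN_RE.sub("[REDACTED_SSN]", text)

def scrub_and_fingerprint(records: list[str]) -> tuple[list[str], str]:
    """Return scrubbed records plus a SHA-256 digest over the scrubbed set.

    Storing the digest with the training run lets an auditor verify that the
    dataset used for training is exactly the one that passed the PII check.
    """
    scrubbed = [scrub_record(r) for r in records]
    digest = hashlib.sha256("\n".join(scrubbed).encode()).hexdigest()
    return scrubbed, digest

clean, fingerprint = scrub_and_fingerprint(
    ["ticket from alice@example.com", "customer SSN 123-45-6789 on file"]
)
# clean == ["ticket from [REDACTED_EMAIL]", "customer SSN [REDACTED_SSN] on file"]
```

Recording the fingerprint alongside the training job's pii_check_passed flag ties the evidence to the exact data that was checked.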

Control Area 2: Model Deployment and Change Management

AI model deployments must be treated as system changes within the SOC 2 change management framework. Auditors increasingly expect model versioning, pre-production validation, and rollback procedures to be as rigorous as code deployments.

REQUIRED ARTIFACTS
  • Model performance benchmarks before/after deployment
  • Approval workflow evidence (who approved promotion to production)
  • Rollback procedure with documented test validation
  • Model card documenting intended use, limitations, known biases
  • Test dataset with held-out validation split documented
  • Monitoring configuration (what alerts on model drift)
  • Human escalation path for low-confidence outputs
  • Embedding model version tracking for RAG systems
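A change record carrying the approval artifacts above might look like the following sketch. Field names mirror the model-promotion log fields used elsewhere in this article; the storage backend (ticketing system or database table) is deliberately left open.

```python
from datetime import datetime, timezone

def build_promotion_record(model_id: str, from_version: str, to_version: str,
                           approver_id: str, test_results_url: str) -> dict:
    """Assemble a change-management record for a model promotion.

    Raises if the promotion does not actually change the version, which
    would leave the change trail ambiguous.
    """
    if from_version == to_version:
        raise ValueError("promotion must change the model version")
    return {
        "model_id": model_id,
        "from_version": from_version,
        "to_version": to_version,
        "approver_id": approver_id,            # who approved promotion
        "test_results_url": test_results_url,  # pre-production validation evidence
        "deployed_at": datetime.now(timezone.utc).isoformat(),
    }
```

Emitting this record from the same pipeline that performs the deployment guarantees no production promotion can occur without a change artifact.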

Control Area 3: Human Oversight Mechanisms

Automated AI decisions affecting customer outcomes — credit decisions, fraud flags, content moderation — require documented human review paths. Auditors examine whether high-impact AI decisions have an appeal or override mechanism and whether those overrides are logged. The control does not require human approval of every decision; it requires that humans can review and override decisions when appropriate, and that such activity is auditable.

Data Access Controls for AI Systems

Data-driven SaaS platforms face access control challenges that are fundamentally different from simple CRUD applications. AI pipelines, batch processing jobs, ML training infrastructure, and vector search systems all need access to sensitive data — often with broader permissions than application service accounts. Designing least-privilege access for these systems while maintaining AI pipeline functionality is one of the most technically demanding aspects of SOC 2 for AI companies.

The AI Access Control Matrix

System Component | Data Access Needed | Minimum Privilege Pattern | Audit Evidence Required
Feature pipeline jobs | Read-only to operational tables | Dedicated service account with SELECT grants on specific schemas only | Service account creation, grant records, quarterly review
ML training jobs | Read-only to feature store | IAM role with row-level security filtering by authorized training datasets | Job execution logs, data access logs, training run metadata
Embedding pipeline | Read source content, write embeddings | Separate accounts for read (source) and write (vector store); no access to raw PII | Pipeline execution logs, which documents were embedded and when
LLM inference (RAG) | Read vector index + retrieved chunks | Query-time tenant_id filter enforced at database layer, not application layer | Query logs showing tenant_id filter was applied; rate limiting by tenant
Model evaluation jobs | Read test datasets + model outputs | Time-boxed credentials (expire after evaluation run); read-only | Credential lifecycle records, evaluation job completion logs

Row-Level Security for Multi-Tenant AI

For multi-tenant SaaS platforms, row-level security (RLS) at the database layer is the most reliable way to prevent cross-tenant data leakage in AI pipelines. Application-layer filtering is insufficient for SOC 2 — auditors expect database-native enforcement where possible.

-- PostgreSQL Row-Level Security for multi-tenant AI data
ALTER TABLE document_chunks ENABLE ROW LEVEL SECURITY;

-- Enforce tenant isolation at the database layer, not the application layer
CREATE POLICY tenant_isolation_policy ON document_chunks
    USING (tenant_id = current_setting('app.current_tenant_id')::BIGINT);

-- Set tenant context in the connection pool before every query.
-- The application must first run: SET app.current_tenant_id = '123'
SELECT *
FROM document_chunks
ORDER BY embedding <=> '[0.1, 0.2, ...]'::vector
LIMIT 10;
-- RLS automatically adds: AND tenant_id = 123
-- This cannot be bypassed by application code.
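The policy above relies on the application setting app.current_tenant_id on each pooled connection before use. A minimal Python sketch of that application-side contract follows; the execute() call in the comment is illustrative pseudocode for any PostgreSQL driver, and validating the tenant id as an integer keeps injection out of the session variable.

```python
def tenant_context_sql(tenant_id) -> str:
    """Build the SET statement for the RLS tenant context.

    Coercing to int raises ValueError/TypeError on non-numeric input,
    so hostile strings can never reach the session variable.
    """
    tid = int(tenant_id)
    return f"SET app.current_tenant_id = '{tid}'"

# Per-request usage (illustrative driver call):
#   conn.execute(tenant_context_sql(request.tenant_id))
```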

Audit Logging Architecture for Data Platforms

Data center server racks with blue lighting — the physical infrastructure underpinning audit logging and security compliance

Three-tier audit logging: immutable infrastructure logs → application event streams → encrypted long-term storage vault.

SOC 2 requires comprehensive audit logs covering authentication events, data access, system changes, and security incidents. For data-driven SaaS, the volume and complexity of relevant events is orders of magnitude higher than a simple web application. Designing audit logging that satisfies auditors without overwhelming your operations team requires careful architecture.

Three-Tier Audit Logging Architecture

T1

Infrastructure Layer — Immutable, High-Volume

Cloud provider native logging · PostgreSQL pgAudit · S3 access logs

Captures every database query, API call, and storage operation. Immutable because it is written by the infrastructure provider rather than the application. Retention: 90 days hot, 1 year cold in encrypted S3/GCS. Key events: all SQL statements above threshold (configurable by query cost), all authentication attempts, all schema changes, all DDL operations.

T2

Application Layer — Semantic, Business-Context-Rich

Application audit trail · User action logs · Data export records

Records business-level events with user context: who exported which dataset, who modified which model configuration, who accessed which customer records, and why (including the reason code if captured). This layer provides the human-readable narrative that auditors and your own security team need for investigations. Schema: (event_type, actor_id, actor_type, resource_type, resource_id, action, metadata_jsonb, ip_address, session_id, occurred_at).

T3

SIEM Layer — Correlated, Alerting, Retention-Managed

Datadog SIEM · Splunk · AWS Security Hub + CloudTrail Lake

Aggregates and correlates events from T1 and T2 into security-relevant signals. Generates alerts for: unusual access patterns (user accessing 1000x their normal data volume), off-hours administrative access, failed authentication cascades, anomalous embedding queries (potential data exfiltration via semantic search). Retention policies enforced here. SOC 2 evidence dashboards generated automatically.

AI-Specific Log Events

Standard logging frameworks were not designed for AI systems. Data-driven SaaS companies must extend their logging to capture AI-specific events that auditors increasingly examine:

Event Type | Why It Matters for SOC 2 | Key Fields to Log
RAG query with retrieval context | Demonstrates tenant isolation in semantic search; enables anomaly detection | tenant_id, query_vector_hash, retrieved_chunk_ids, latency_ms, result_count
Embedding pipeline execution | Proves which data was processed; enables staleness audits | pipeline_id, document_count, embedding_model, start_time, end_time, error_count
Model inference request | Tracks customer data processed by AI; supports GDPR data mapping | model_id, model_version, input_hash (not raw input), tenant_id, inference_time_ms
Model promotion to production | Change management evidence for AI deployments | model_id, from_version, to_version, approver_id, test_results_url, deployed_at
Training job execution | Data lineage for regulatory audits; evidence of PII exclusion | job_id, dataset_version, training_record_count, pii_check_passed, run_by, duration
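As one example from the table, a model inference event can be built so that only a hash of the raw input ever reaches the log store, which supports audits without copying customer content into logs. An illustrative sketch, not any particular logging library's API:

```python
import hashlib
from datetime import datetime, timezone

def inference_log(model_id: str, model_version: str, tenant_id: int,
                  raw_input: str, inference_time_ms: float) -> dict:
    """Build an inference log event; the raw prompt is hashed, never stored."""
    return {
        "model_id": model_id,
        "model_version": model_version,
        "tenant_id": tenant_id,
        "input_hash": hashlib.sha256(raw_input.encode()).hexdigest(),
        "inference_time_ms": inference_time_ms,
        "logged_at": datetime.now(timezone.utc).isoformat(),
    }
```

The hash still allows exact-match correlation (did this specific input appear before, and for which tenant) without retaining the content itself.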

Third-Party and Vendor Risk Management

Data-driven SaaS companies typically have a larger and more complex vendor footprint than traditional software companies. AI model providers (OpenAI, Anthropic, Cohere), vector database services (Pinecone, Weaviate Cloud), data pipeline platforms (Fivetran, dbt Cloud), and cloud infrastructure providers all sit in the data path and must be evaluated under the SOC 2 vendor risk management framework.

Vendor Risk Tiering for AI Systems

Vendor Tier | Examples | Due Diligence Required | Review Frequency
Tier 1 — Critical Data Path | Cloud provider, primary database, LLM API provider, vector DB | SOC 2 Type II report review, contract DPA, data residency confirmation, sub-processor disclosure | Annual + on incident
Tier 2 — Data Processing | ETL tools, analytics platforms, embedding API providers, monitoring SaaS | SOC 2 or ISO 27001 report, security questionnaire, data retention policy review | Annual
Tier 3 — Support Services | Communication tools, project management, HR platforms | Security questionnaire, privacy policy review | Biennial

The LLM Provider Problem

Using third-party LLM APIs (e.g., OpenAI's GPT models or Anthropic's Claude) creates a unique vendor risk challenge: customer data sent to these APIs may be processed in ways that do not align with your SOC 2 commitments. Auditor practice in 2025 increasingly probes this through AI model governance expectations. The control pattern that auditors generally accept:

✓ ACCEPTED CONTROL PATTERN FOR LLM API VENDORS
  1. Data classification before sending: Only send data classified as "AI-processable" to external LLM APIs. PII must be redacted or pseudonymized at the application layer before transmission.
  2. Enterprise API agreements: Use enterprise-tier API agreements that include zero-data-retention clauses (OpenAI Enterprise, Anthropic's commercial API with a data processing addendum).
  3. Transmission logging: Log every API call that transmits customer data context, including which tenant's data was included and a timestamp.
  4. Contract DPA: Maintain a signed Data Processing Agreement with each LLM provider that classifies it as a sub-processor under your privacy commitments.
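Step 1 (classification and redaction before transmission) might be sketched as follows. The tags and regex patterns are illustrative placeholders, not a complete PII detector; a production gate would call a real classification service.

```python
import re

# Illustrative PII patterns: email addresses and US SSNs only.
PII_PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
]

def classify_chunk(text: str) -> str:
    """Tag a context chunk before it is allowed out to an LLM API."""
    if any(p.search(text) for p in PII_PATTERNS):
        return "contains_pii"
    return "ai_processable"

def prepare_for_llm(chunks: list[str]) -> list[str]:
    """Redact PII-tagged chunks so only AI-processable content is transmitted."""
    prepared = []
    for chunk in chunks:
        if classify_chunk(chunk) == "contains_pii":
            for p in PII_PATTERNS:
                chunk = p.sub("[REDACTED]", chunk)
        prepared.append(chunk)
    return prepared
```

Placing this gate in the single code path that calls the LLM API makes the control easy to evidence: every outbound payload provably passed through it.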

Automation Strategy: How to Make SOC 2 Operationally Sustainable

The single most important investment for sustainable SOC 2 compliance is evidence automation. Teams that collect evidence manually — screenshots, exports, quarterly spreadsheet reviews — spend 40–60 hours per audit cycle on evidence collection alone (Screenata, 2025). Teams with mature automation can reduce that to under two hours per cycle. The difference is not marginal; it is the difference between compliance that is sustainable and compliance that burns out your engineering team.

Compliance Automation Platform Comparison

Platform | Best For | Key Strength | Annual Cost (Est.)
Vanta | Startups moving fast, first SOC 2 | 1,200+ pre-built tests running hourly; fastest time to first report; strong startup ecosystem | $15K–$40K/yr
Drata | Series B+ companies needing customization | Deep workflow customization; strong multi-framework support (SOC 2 + ISO + HIPAA simultaneously) | $20K–$60K/yr
Secureframe | Mid-market, multiple frameworks | Penetration testing bundled; good value for multi-framework compliance programs | $15K–$35K/yr
Thoropass | Audit-inclusive pricing preferred | Auditor included in platform; eliminates separate auditor procurement | $25K–$50K/yr (all-in)
⚠ IMPORTANT: Platform ≠ Compliance

Compliance automation platforms dramatically reduce overhead but do not eliminate engineering work. You still need to implement the actual security controls. Vanta can automatically verify that MFA is enabled — but you still have to enable MFA and enforce it. The platform evidence is only as good as the controls it is monitoring.

What to Automate vs. Keep Manual

✓ AUTOMATE THESE
  • MFA enforcement verification (all users, all systems)
  • Access review scheduling and evidence collection
  • Security training completion tracking
  • Vendor SOC 2 report expiry monitoring
  • Cloud configuration drift detection
  • Encryption-at-rest and in-transit verification
  • Vulnerability scan scheduling and result capture
  • Employee onboarding/offboarding access provisioning
⚠ KEEP THESE MANUAL (WITH DOCUMENTED PROCESS)
  • Annual risk assessment review and updates
  • Incident postmortem writing and review
  • Vendor security questionnaire interpretation
  • Policy exception approval and review
  • Penetration testing scope definition and review
  • AI model governance decisions
  • SOC 2 scope boundary decisions
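One of the automatable checks above, vendor SOC 2 report expiry monitoring, reduces to simple date arithmetic. A sketch, assuming a 12-month report validity window and an illustrative vendor registry shape:

```python
from datetime import date, timedelta

def expiring_reports(vendors: list[dict], today: date,
                     warn_days: int = 60) -> list[str]:
    """Return vendor names whose SOC 2 report expires within warn_days
    of today (or has already expired), assuming 12-month validity."""
    flagged = []
    for v in vendors:
        expires = v["report_date"] + timedelta(days=365)
        if expires - today <= timedelta(days=warn_days):
            flagged.append(v["name"])
    return flagged
```

Compliance platforms run exactly this kind of check continuously; rolling your own is only worthwhile for vendors the platform does not track.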

Realistic Timeline and Cost for Data-Driven SaaS

The internet is full of "get SOC 2 in 30 days" marketing claims. These are misleading for data-driven SaaS companies with complex AI pipelines. Here is the realistic picture based on actual timelines from B2B SaaS companies with data platform complexity comparable to a ThunderScan-style product.

Phase-by-Phase Timeline

PHASE 1

Gap Assessment and Scope Definition (Weeks 1–4)

Activities

  • Map all data flows: ingestion, processing, storage, access
  • Identify all AI systems in scope (models, pipelines, RAG)
  • Conduct gap assessment against the Trust Services Criteria
  • Select a compliance automation platform
  • Define audit scope boundaries with the future auditor

Typical Costs

  • Compliance platform setup: $5K–$15K
  • Consultant gap assessment: $8K–$20K
  • Engineering time: 40–80 hours
  • Total Phase 1: ~$20K–$40K
PHASE 2

Control Implementation (Months 2–4)

The heaviest engineering investment. For data-driven SaaS, this includes implementing database audit logging, configuring row-level security, building evidence-generating automation for AI pipeline logs, establishing change management process for model deployments, and remediating cloud configuration gaps. Typical investment: 200–400 engineering hours, $30K–$80K total.

PHASE 3

Evidence Accumulation Period (Months 4–10)

SOC 2 Type II requires evidence that controls operated consistently over the observation period (typically 6–12 months). This phase is primarily about operating the controls you built, collecting evidence, and resolving any gaps discovered. Monthly review cycles with the compliance platform are essential to catch issues before they become audit findings.

PHASE 4

Formal Audit (Months 10–12)

Auditor fieldwork (3–6 weeks), evidence review, and report issuance. Budget $15K–$50K for auditor fees depending on scope. Larger scopes (multiple Trust Services Categories, complex AI systems, many integrations) drive cost toward the upper end. After the first Type II audit, subsequent annual audits are typically less expensive because controls are mature and evidence is automated.

Where Data-Driven SaaS Teams Fail

⚠ FAILURE MODE 1: Scoping AI Systems Too Late

Teams often define their SOC 2 scope before their AI systems are mature, then add AI features during the observation period without updating the scope. Auditors flag this as a scope gap. Fix: define scope explicitly enough to cover "AI-powered features operating in production" even if specific implementations change. Revisit scope whenever a new AI system is launched.

⚠ FAILURE MODE 2: Treating Vector Databases as Out of Scope

Teams that added Pinecone, Weaviate, or pgvector embeddings after their initial scope definition often fail to include vector databases in access control reviews, encryption checks, or vendor risk assessments. Vector stores contain customer data (the source documents being embedded) and must be treated with the same rigor as the primary relational database.

⚠ FAILURE MODE 3: Model Deployments Without Change Records

Pushing a new ML model or updated embedding to production without a change record violates SOC 2 change management controls. Fix: treat every model artifact as a versioned deployable. Use your existing deployment pipeline (GitOps, Terraform) for model deployments, and ensure every production promotion generates an artifact in your change management system with approvals captured.

⚠ FAILURE MODE 4: LLM API Calls Without Data Classification

Sending customer data to external LLM APIs without first classifying and potentially redacting PII creates a confidentiality control gap. Fix: implement a data classification step in your LLM integration layer. Tag each chunk of context being sent (contains PII, contains customer content, contains system-only data). Apply redaction to PII-tagged content before transmission.

⚠ FAILURE MODE 5: CUEC Blindness

Complementary User Entity Controls (CUECs) are controls that your vendors place on you — they appear in your vendors' SOC 2 reports and describe what you are responsible for doing to maintain security. Most teams never read their vendors' SOC 2 reports carefully enough to identify CUECs. Fix: when collecting vendor SOC 2 reports, specifically extract and document the CUEC section for each critical vendor, then verify you are actually implementing each required control.

Database-Specific Controls That Auditors Examine

For data-driven SaaS companies, the database layer receives heightened scrutiny because it is where the most sensitive customer data resides. Auditors have become increasingly sophisticated in evaluating database security controls, particularly in the context of AI data access patterns.

Essential Database Security Controls

Encryption Controls

  • Encryption at rest (AES-256 minimum, KMS-managed keys)
  • TLS 1.2+ for all connections (no TLS 1.0/1.1)
  • Column-level encryption for highest-sensitivity fields (SSN, payment data)
  • Vector embeddings encrypted at rest (they can be used to reconstruct source content)

Access Controls

  • No shared database credentials (each service account is unique)
  • Least privilege: AI pipelines get SELECT only on specific schemas
  • Quarterly access reviews with evidence captured
  • MFA required for all human access to production databases

Audit Logging

  • pgAudit enabled with DDL and role change logging at minimum
  • Logs shipped to an immutable external store (CloudTrail Lake, S3 with WORM)
  • Alerting on high-volume data exports (>10K rows in a single query)
  • Alerts on off-hours human access to production

Network Controls

  • Database not publicly accessible (VPC-only, no public endpoint)
  • Separate security groups for application servers vs. AI pipeline instances
  • VPN or private link required for all developer access
  • IP allowlisting for all production database connections
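The high-volume export alert above is, at its core, a threshold check over query audit records. A minimal sketch; in production this would run in the SIEM tier, and the 10K-row threshold is the illustrative figure from the list:

```python
def export_alerts(query_log: list[dict], row_threshold: int = 10_000) -> list[dict]:
    """Flag query audit records that returned more rows than the threshold.

    Each record is assumed to carry a rows_returned count, e.g. derived
    from pgAudit or application-layer query logs.
    """
    return [q for q in query_log if q["rows_returned"] > row_threshold]
```

A fixed threshold is the simplest form; per-user baselines (the "1000x normal volume" signal mentioned in the SIEM tier) catch exfiltration that stays under a global cap.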

Conclusion

SOC 2 compliance for data-driven SaaS is more complex than it was two years ago, and it will become more complex as AI governance requirements mature. The shifts in 2025 auditor practice (heightened scrutiny of AI governance, continuous monitoring evidence, and cloud configuration management) mean that companies that designed their compliance programs around pre-AI workflows face non-trivial remediation work.

The organizations that handle this transition well share three characteristics: they treat SOC 2 as an engineering problem, not a paperwork problem; they invest in automation infrastructure early enough to capture evidence over meaningful time periods; and they scope their AI systems explicitly from the beginning rather than treating them as out-of-scope novelties.

The bottom line for data-driven SaaS founders and CTOs:

Every enterprise deal your sales team pursues requires SOC 2. Every month without it is a month of qualified deals that cannot close. The $50K–$150K total investment in a first SOC 2 Type II program, properly designed and automated, typically pays for itself with the first enterprise deal it unblocks. The question is not whether to pursue SOC 2, but how to build it in a way that does not require continuous heroic effort to maintain.

Sources & Further Reading

  1. DSALTA. (2025). SOC 2 Best Practices 2025: Your Complete Guide to Modern Compliance Excellence.
  2. Drata. (2025). How Much Does a SOC 2 Audit Cost?
  3. Vanta. (2025). SOC 2 Audit Timeline: What to Expect.
  4. SecureLeap. (2025). SOC 2 Tools Comparison 2025: Vanta, Drata, and Secureframe.
  5. AICPA-CIMA. (2022). 2017 Trust Services Criteria for Security, Availability, Processing Integrity, Confidentiality, and Privacy (With Revised Points of Focus — 2022). American Institute of CPAs.

Further Reading from the Author

Isaac Shi writes about AI, software, and entrepreneurship at isaacshi.com. These essays provide the strategic and philosophical context behind this thesis.

Essay · Isaac Shi
The Tower of London & Cybersecurity
How a 1,000-year-old fortress's defense architecture mirrors what modern SaaS companies must build — layered, deliberate, and never optional.
Essay · Isaac Shi
AI: White Walkers or HAL 9000?
The broader threat landscape that makes compliance infrastructure like SOC 2 strategically essential in an AI-accelerated world.
© 2026 Thor ThunderScan