ML Platform Engineer

About THIA

THIA is transforming how small and medium enterprises build internal applications and automate business processes. Our AI-powered platform enables business experts to create custom applications using natural language, eliminating the need for expensive development teams. We're well-funded, generating revenue, and solving real problems for companies that need more than off-the-shelf software.

The Role

This is the role for an engineer who builds the systems ML runs on. You'll own model serving, eval pipelines, and the observability layer that makes everything inspectable - the work that makes the rest of the ML team faster. You won't be training models, but you'll need to understand them well enough to debug serving and eval pipelines when they misbehave. You'll work closely with a small, senior team and have direct influence over how our ML stack is built.

We move fast, keep our codebase clean, and take tech debt seriously.

What You'll Do

ML Platform Engineering

  • Build and operate model serving infrastructure: routing, batching, autoscaling, latency, cost
  • Build eval pipelines and observability tooling that make assistant behavior inspectable
  • Build batch inference and data pipelines that feed training and evaluation
  • Support the multi-tenant rollout: tenant-aware routing, isolation, and resource management
  • Read ML code well enough to debug serving and eval pipelines end to end

Collaboration

  • Work autonomously while staying tightly coordinated with a small, async-first team
  • Partner with the ML team to make their iteration loops faster
  • Contribute to architectural decisions and internal documentation

What We're Looking For

Must-Haves

  • Strong Python; comfortable with at least one other production language (Go, Java, TypeScript, C, etc.)
  • Production experience with backend services and one or more of: model-serving infra, batch inference pipelines, queue-based pipelines, or large-scale data processing
  • Distributed-systems fundamentals: queues, autoscaling, observability
  • Cloud infrastructure experience (GCP/AWS/Azure)
  • Able to read ML code well enough to debug serving + eval pipelines, or willing to learn

Strongly Preferred

  • LLM-specific infra: routing, batching, KV-cache management, structured generation
  • Eval pipelines or LLM observability (OpenTelemetry traces, LangSmith, Phoenix, custom)
  • Multi-tenant SaaS infrastructure experience

You Don't Need

  • Experience training models from scratch - this role is about the systems around them

How We Evaluate

We hire for skill and potential, however acquired. If you can do the work, we want to hear from you.

A Note on AI

We actively encourage using AI tools to move faster. Real-world experience is still required - to direct AI effectively, catch what it misses, and spot security issues before they reach production.

Our Stack

Python · TypeScript · Modal · GCP · PostgreSQL / SQLite · Qdrant · Redis · Terraform · Docker · GitLab CI/CD · Datadog · Wiz

What You Gain

  • Ownership - end-to-end accountability for ML platform infrastructure at a growing AI company
  • Impact - direct collaboration with leadership and real influence on technical direction
  • Growth - clear path to a lead role as the team expands
  • Equity - early-stage equity at an AI startup
  • Flexibility - fully remote with flexible hours
  • Quality - a clean codebase and a team that takes tech debt seriously

 


Corvallis, United States

Senior Data Engineer / Platform Engineer

About THIA

THIA is transforming how small and medium enterprises build internal applications and automate business processes. Our AI-powered platform enables business experts to create custom applications using natural language, eliminating the need for expensive development teams. We're well-funded, generating revenue, and solving real problems for companies that need more than off-the-shelf software.

The Role

This is a high-impact, hands-on role where data engineering and cloud infrastructure are equally part of the job. You'll own delivery across multiple products and environments — designing schemas, building pipelines, writing Terraform, and spinning up new environments from templates. You'll work closely with a small, senior team and have real influence on technical direction.

We move fast, keep our codebase clean, and take tech debt seriously.

What You'll Do

Platform & Data Engineering

  • Own end-to-end infrastructure delivery across all products and environments — from Terraform modules to production deployments
  • Write and maintain Terraform for our full cloud environment: networking, compute, databases, storage, and secrets management
  • Deploy and operate containerized services; spin up new environments from templates
  • Design, build, and optimize PostgreSQL databases and data models for our in-house data platform
  • Build and maintain ETL/ELT pipelines and file-processing functions
  • Manage multi-environment deployments via GitLab CI/CD
  • Maintain authentication infrastructure (Keycloak/OIDC) across products and client environments
  • Manage secrets, service accounts, and IAM across multiple cloud projects
  • Implement security controls aligned with SOC2; remediate findings and keep access policies current
  • Handle production issues and incident response

Collaboration

  • Work autonomously while staying tightly coordinated with a small, async-first team
  • Collaborate with AI and App engineering teams on service integrations and API design
  • Contribute to architectural decisions and internal documentation

What We're Looking For

Must-Haves

  • Strong PostgreSQL expertise: schema design, query optimization, performance tuning
  • Hands-on Terraform experience managing real cloud infrastructure
  • Docker and managed container platform experience
  • Experience building and maintaining CI/CD pipelines
  • Cloud infrastructure experience — GCP preferred, any major cloud considered
  • Self-directed, ownership mentality

Strongly Preferred

  • Past ownership and maintenance of a production system used by real customers
  • Background in a startup environment

Nice to Have

  • Terminal/CLI-first approach — preference for scripted, repeatable solutions over manual UI workflows

A Note on AI

We actively encourage using AI tools to move faster. Real-world experience is still required — to direct AI effectively, catch what it misses, and spot security issues before they reach production.

Our Stack

GCP · PostgreSQL · Qdrant · Redis · Terraform · Docker · GitLab CI/CD · Keycloak · OAuth2/OIDC · Datadog · Wiz

What You Gain

  • Ownership — end-to-end accountability for infrastructure at a growing AI company
  • Impact — direct collaboration with leadership and real influence on technical direction
  • Growth — clear path to a lead role as the team expands
  • Equity — early-stage equity at an AI startup
  • Flexibility — fully remote with flexible hours
  • Quality — a clean codebase and a team that takes tech debt seriously

 

Corvallis, United States

Machine Learning Engineer

About THIA

THIA is transforming how small and medium enterprises build internal applications and automate business processes. Our AI-powered platform enables business experts to create custom applications using natural language, eliminating the need for expensive development teams. We're well-funded, generating revenue, and solving real problems for companies that need more than off-the-shelf software.

The Role

This is a generalist ML role for someone who wants depth in modeling and pragmatism about everything else it takes to ship. You'll pick up ML features end to end - framing the problem, picking the right approach (fine-tune, prompt, retrieve, or something cheaper), building the eval that tells us it works, and getting it into production. You'll work closely with a small, senior team and have real influence on how we build.

We move fast, keep our codebase clean, and take tech debt seriously.

What You'll Do

ML Engineering

  • Ship ML features end to end - problem framing, modeling, evaluation, deployment, iteration
  • Make pragmatic tradeoffs across fine-tuning, prompting, and retrieval based on what the problem actually needs
  • Build and extend evaluation pipelines: offline metrics, regression detection, eval datasets
  • Work with cloud ML infrastructure (training, serving, monitoring)
  • Help drive concrete quality gains through eval results, customer feedback, and prompt iteration

Collaboration

  • Work autonomously while staying tightly coordinated with a small, async-first team
  • Partner with the platform team on serving and observability, and with product on what to ship next
  • Contribute to architectural decisions and internal documentation

What We're Looking For

Must-Haves

  • Strong Python; comfortable with at least one deep-learning framework (PyTorch preferred)
  • Has trained or fine-tuned transformer-based models and seen them deployed
  • Has built or substantially contributed to a model-evaluation pipeline
  • Comfortable with cloud ML infrastructure (training, serving, monitoring)
  • Experience collaborating with product or domain experts to ship real features

Strongly Preferred

  • LLM eval / observability work (LLM-as-judge, trace enrichment)
  • RAG / retrieval system experience
  • MLOps tooling (model registries, feature stores, experiment tracking)

You Don't Need

  • Mastery of every part of the ML stack - depth in one area + working knowledge across the rest is the target

How We Evaluate

We hire for skill and potential, however acquired. If you can do the work, we want to hear from you.

A Note on AI

We actively encourage using AI tools to move faster. Real-world experience is still required — to direct AI effectively, catch what it misses, and spot security issues before they reach production.

Our Stack

Python · PyTorch · HuggingFace · Modal · GCP · PostgreSQL / SQLite · Qdrant · Redis · Docker · GitLab CI/CD · Datadog

What You Gain

  • Ownership - end-to-end accountability for ML features at a growing AI company
  • Impact - direct collaboration with leadership and real influence on technical direction
  • Growth - clear path to a lead role as the team expands
  • Equity - early-stage equity at an AI startup
  • Flexibility - fully remote with flexible hours
  • Quality - a clean codebase and a team that takes tech debt seriously

 


Corvallis, United States

Applied ML Engineer

About THIA

THIA is transforming how small and medium enterprises build internal applications and automate business processes. Our AI-powered platform enables business experts to create custom applications using natural language, eliminating the need for expensive development teams. We're well-funded, generating revenue, and solving real problems for companies that need more than off-the-shelf software.

The Role

This is a hands-on ML role where you'll own modeling work end to end - fine-tuning the language models that power our platform, building the eval frameworks that tell us whether they're getting better, and shipping the result into production. You'll work closely with a small, senior team and have direct influence over what we train, how we measure quality, and how the model evolves alongside the product.

We move fast, keep our codebase clean, and take tech debt seriously.

What You'll Do

Modeling & Evaluation

  • Fine-tune transformer-based models (instruction tuning, LoRA/PEFT, RLHF/DPO, distillation) and ship the result through eval into production
  • Design and curate evaluation datasets that meaningfully reflect real customer behavior
  • Build LLM-as-judge pipelines and align them against human judgment
  • Run experiments end to end: hypothesis → controlled comparison → calibrated metrics → decision
  • Own model-quality monitoring in production and close the loop back into training data

Collaboration

  • Work autonomously while staying tightly coordinated with a small, async-first team
  • Partner with the platform team on model serving, observability, and the multi-tenant rollout
  • Contribute to architectural decisions and internal documentation

What We're Looking For

Must-Haves

  • Strong Python; working expertise with PyTorch and the HuggingFace transformers ecosystem
  • Hands-on fine-tuning experience on transformer-based models — at least one shipped or rigorously evaluated result
  • Experimental rigor: hypothesis design, controlled comparisons, calibrated metrics
  • Has carried at least one ML feature through the full lifecycle (data → train → eval → deploy → monitor)
  • Cloud ML lifecycle experience (GCP/AWS/Azure)

Strongly Preferred

  • LLM-as-judge eval pipelines and human-judgment alignment
  • RAG or retrieval system design experience

How We Evaluate

We hire for skill and potential, however acquired. If you can do the work, we want to hear from you.

A Note on AI

We actively encourage using AI tools to move faster. Real-world experience is still required - to direct AI effectively, catch what it misses, and spot security issues before they reach production.

Our Stack

Python · PyTorch · HuggingFace · Modal · GCP · PostgreSQL / SQLite · Qdrant · Redis · GitLab CI/CD · Datadog

What You Gain

  • Ownership - end-to-end accountability for the models that power a growing AI company
  • Impact - direct collaboration with leadership and real influence on technical direction
  • Growth - clear path to a lead role as the team expands
  • Equity - early-stage equity at an AI startup
  • Flexibility - fully remote with flexible hours
  • Quality - a clean codebase and a team that takes tech debt seriously


Corvallis, United States

Backend Engineer, ML Systems

About THIA

THIA is transforming how small and medium enterprises build internal applications and automate business processes. Our AI-powered platform enables business experts to create custom applications using natural language, eliminating the need for expensive development teams. We're well-funded, generating revenue, and solving real problems for companies that need more than off-the-shelf software.

The Role

This is a backend role for an engineer who's curious about ML but not pretending to be an ML researcher. You'll build the API surface, orchestration layer, and tenant-aware services that sit between our customers and our ML core. You'll integrate models, not train them - but you'll be in the same room as people who do, and you'll learn as much about ML as you want to. You'll work closely with a small, senior team and have real influence on how the system is built.

We move fast, keep our codebase clean, and take tech debt seriously.

What You'll Do

Backend Engineering

  • Design and build tenant-aware API services: REST/RPC, auth, multi-tenancy, audit logging
  • Build async pipelines and background jobs that feed and consume the ML layer
  • Design data models and schemas in Postgres; integrate with vector and cache layers
  • Integrate with model-serving systems as a consumer - you build clean interfaces, the ML team builds the models behind them
  • Help drive the multi-tenant rollout across customer environments

Collaboration

  • Work autonomously while staying tightly coordinated with a small, async-first team
  • Partner with the ML team to consume their systems cleanly and feed back operational signal
  • Contribute to architectural decisions and internal documentation

What We're Looking For

Must-Haves

  • Strong Python and at least one other production language (TypeScript, Go, Java, C, etc.)
  • Production experience with relational databases (Postgres, SQLite, or similar) and at least one queue / streaming system
  • API design (REST and/or RPC); auth and authorization patterns
  • Cloud infrastructure experience (GCP preferred)
  • Has shipped code into production environments with real customers and uptime expectations

Strongly Preferred

  • Experience integrating with LLM APIs or model-serving systems as a consumer
  • Multi-tenant SaaS experience
  • Background-job / async-pipeline systems
  • Vector DB or hybrid retrieval experience

You Don't Need

  • ML modeling experience - you'll integrate models, not train them

How We Evaluate

We hire for skill and potential, however acquired. If you can do the work, we want to hear from you.

A Note on AI

We actively encourage using AI tools to move faster. Real-world experience is still required - to direct AI effectively, catch what it misses, and spot security issues before they reach production.

Our Stack

Python · TypeScript · GCP · PostgreSQL / SQLite · Qdrant · Redis · Docker · GitLab CI/CD · Keycloak · OAuth2/OIDC · Datadog

What You Gain

  • Ownership - end-to-end accountability for backend systems at a growing AI company
  • Impact - direct collaboration with leadership and real influence on technical direction
  • Growth - clear path to a lead role as the team expands
  • Equity - early-stage equity at an AI startup
  • Flexibility - fully remote with flexible hours
  • Quality - a clean codebase and a team that takes tech debt seriously


Corvallis, United States