We are building a new AI/ML team at Parable, and are looking for a Senior/Staff SWE to join us.
This role places you in the eye of the AI storm – guiding the execution of greenfield projects in the rapidly evolving field of AI, shipping AI pipelines that provide deep insight to customers, and augmenting the product vision through ML.
You will work alongside our CTO and seasoned engineers on a team that develops AI agents and pipelines to drive insights for clients, and builds software to automate those insights.
You bring significant, hands-on experience building tools and workflows with LLMs. You have a deep understanding of the current technologies' capabilities and constraints, and you have a clear vision for where AI is headed in the near future. You want to build rapidly and yet remain close to our end users’ needs.
Parable’s mission is to make time matter.
Parable gives CEOs of companies with 1,000+ employees deep observability into the time their teams spend across all strategies, projects, and processes. Our insights help teams focus on the work that matters most and support data-driven resource-allocation decisions.
The company was founded by seasoned founders with multiple 9-figure exits under their belts, and reached $2M in ARR within 6 months of going to market. Parable raised $17 million from investors like HOF Capital and Story Ventures, as well as 50+ founders and executives.
On the technical side, we are building an operating system for large enterprises – ingesting siloed data from across the workplace stack, structuring this data into our enterprise ontology, and contextualizing it to draw sharp insights about our client companies.
The Raw Data team ensures that client data lands reliably and securely in each client’s private Parable data lake. Our vision is simple: the more data we can connect, the better our results and the more value we deliver to clients.
The team’s focus areas include:
Connector Factory – Productizing the creation of API connectors and taps, reducing friction for engineers, and enabling faster expansion of data coverage.
Data Productization – Researching, documenting, and packaging data from diverse systems (HRIS, SaaS APIs, proprietary logs) for downstream Ontology and Context Mining teams.
Observability – Delivering tooling to track connector coverage, tap reliability, and overall data completeness.
Documentation & Developer Experience – Ensuring both internal and external stakeholders can easily understand and extend connector behavior.
The Raw Data engineers will work with a Technical Product Manager (TPM) who owns roadmap definition, client onboarding workflows, and connector prioritization. As the Senior Engineer, you will be the TPM’s counterpart in driving technical execution and delivery. In this role, you will be:
Designing and building scalable ingestion pipelines to bring data from SaaS APIs, enterprise systems, and custom client sources.
Owning major projects within the Raw Data roadmap, from building CLI tooling to scaffolding GenAI-assisted taps, and leading technical design reviews.
Partnering closely with the TPM to translate connector roadmaps and product requirements into production-ready systems, ensuring smooth alignment between product intent and technical execution.
Improving developer experience by creating tools, automation, and documentation that reduce connector and tap build time.
Building observability and reliability into data ingestion systems, ensuring internal teams and clients can trust coverage and completeness.
Setting technical direction for parts of the Raw Data platform, mentoring other engineers, and growing into a lead.
This is a high-impact IC role with technical leadership scope. You’ll work closely with other senior engineers and the Raw Data TPM to:
Lead development of Connector Factory tooling, including CLI support, scaffolding frameworks, and GenAI-assisted workflows.
Ship new connectors and taps that unlock high-priority client data sources (e.g., HRIS, project management, communication logs).
Build observability features for connector health, coverage, and data volumes.
Contribute to API documentation and developer playbooks, enabling faster onboarding of new connectors.
Partner with the TPM and Ontology engineers to deliver the raw data required by the Ontology and Client Delivery teams (e.g., People, Projects, Goals).
6+ years of experience building large-scale data pipelines, distributed ingestion systems, or API-based integrations.
Proficiency in Python, Rust, TypeScript, and SQL, or comparable experience in similar ecosystems.
Hands-on experience with GCP (Cloud Run, Pub/Sub, Cloud Storage, BigQuery, Cloud SQL, Memorystore), Docker, and Pulumi — or equivalent technologies on AWS/Azure.
Strong grounding in data lake/warehouse design (Apache Iceberg or similar) and stream/batch processing.
Demonstrated ability to own end-to-end projects: from reviewing API docs and designing integration strategies to deploying production-ready services.
Solid grasp of security and reliability practices for sensitive enterprise data.
Excellent communication skills and the ability to mentor peers while driving technical decisions.
Interest in growing toward Staff Engineer scope — technical leadership without direct management.