Location: Remote, with quarterly travel to DC, Indianapolis, or Gaithersburg, MD
Clearance: Public Trust
The candidate will travel to one of the facilities in either Indianapolis or Gaithersburg, MD on a quarterly basis for two days. The interview process will be a 30-minute screening with the manager, followed by a one-hour technical round and a second follow-up interview to finalize the decision. We are looking for a junior data engineering candidate with a minimum of two years of experience, not a senior-level candidate: two or three years of engineering experience, including at least six months with the GenAI skill set. The candidate should know large-volume data handling and SQL databases, and have experience working with AWS; some of the databases also run on the Azure cloud, so some Azure exposure is preferable, but AWS is the primary platform. Experience with transfer-and-load processes such as SSIS, big-data extraction with PySpark, or similar scripting is expected.
Seeking a GenAI Data Automation Engineer to design and implement innovative, AI-driven automation solutions across AWS and Azure hybrid environments. You will be responsible for building intelligent, scalable data pipelines and automations that integrate cloud services, enterprise tools, and Generative AI to support mission-critical analytics, reporting, and customer engagement platforms.
The ideal candidate is mission-focused and delivery-oriented, and applies critical thinking to create innovative solutions and resolve technical issues.
Design and maintain data pipelines in AWS using S3, RDS/SQL Server, Glue, Lambda, EMR, DynamoDB, and Step Functions. Develop ETL/ELT processes to move data across multiple systems, including from DynamoDB to SQL Server (AWS) and between AWS and Azure SQL systems. Integrate AWS Connect and NICE inContact CRM data into the enterprise data pipeline for analytics and operational reporting. Engineer and enhance ingestion pipelines with Apache Spark, Flume, and Kafka for real-time and batch processing into Apache Solr and AWS OpenSearch platforms.
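As a rough illustration of the kind of ETL step described above, a minimal PySpark sketch that loads a DynamoDB export landed in S3 into SQL Server over JDBC might look like the following. All bucket names, table names, and connection details are hypothetical placeholders, and the job assumes an environment (such as EMR or Glue) where S3 access and the SQL Server JDBC driver are already configured:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("dynamodb-to-sqlserver").getOrCreate()

    # Read a DynamoDB table export that has landed in S3 as JSON (hypothetical path).
    raw = spark.read.json("s3://example-bucket/dynamodb-export/")

    # Light transformation: normalize column names and stamp the load time.
    cleaned = (
        raw.select([F.col(c).alias(c.lower()) for c in raw.columns])
           .withColumn("loaded_at", F.current_timestamp())
    )

    # Write to SQL Server over JDBC (hypothetical host, database, and table).
    (cleaned.write
        .format("jdbc")
        .option("url", "jdbc:sqlserver://example-host:1433;databaseName=analytics")
        .option("dbtable", "dbo.customer_events")
        .option("user", "etl_user")
        .option("password", "REDACTED")
        .mode("append")
        .save())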
Leverage Generative AI services and frameworks (AWS Bedrock, Amazon Q, Azure OpenAI, Hugging Face, LangChain) to build intelligent automations that support analytics, reporting, and customer engagement workflows.
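As a minimal, hypothetical sketch of wiring one of these services into a Python data workflow, the snippet below calls a Bedrock-hosted model to summarize a record; the model ID and prompt are placeholders, and it assumes boto3 with AWS credentials already configured:

    import json
    import boto3

    # Hypothetical example: ask a Bedrock-hosted model to summarize a record.
    bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

    request = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 256,
        "messages": [
            {"role": "user", "content": "Summarize this support ticket: ..."}
        ],
    }

    response = bedrock.invoke_model(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # placeholder model ID
        body=json.dumps(request),
    )

    # The response body is a stream of JSON; print the model's text output.
    print(json.loads(response["body"].read())["content"][0]["text"])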
Qualifications: BS in Computer Science or a related field with 2+ years of data engineering and automation experience. Hands-on experience with LLMs and Generative AI frameworks such as AWS Bedrock, Azure OpenAI, or open-source platforms. Hands-on experience with SQL, SSIS, Python, Spark, Bash, PowerShell, and the AWS/Azure CLIs. Experience with AWS services such as S3, RDS/SQL Server, Glue, Lambda, EMR, and DynamoDB. Familiarity with Apache Flume, Kafka, and Solr for large-scale data ingestion and search. Experience integrating REST API calls into data pipelines and workflows. Familiarity with JIRA and GitHub / Azure DevOps / Jenkins for SDLC and CI/CD automation. Strong troubleshooting and performance-optimization skills in SQL, Spark, or other data engineering solutions. Experience operationalizing Generative AI (GenAI Ops) pipelines, including model deployment, monitoring, retraining, and lifecycle management for LLMs and AI-enabled data workflows. Good communication and presentation skills.
Position Details:
Pay Rate / Range: $40.00 - $50.28. The above range represents the range expected for the position; however, final offers are based on a number of factors, such as the position's responsibilities; the candidate's experience, education, and skills; location; travel required; and current market conditions.
Benefits (Regular, Full-Time Employees): Medical, Dental, and Vision offerings; Weekly Direct Deposit; Paid Holidays and Personal Time Off; 401(k) with match; Voluntary Life and AD&D; Short / Long Term Disability, plus other voluntary coverages; Pre-Paid Legal and Employee Assistance Programs; Northwest Federal Credit Union Membership; BB&T @ Work Program.
This program requires US Citizenship.
ABBTECH is an EOE/Minorities/Women/Disabled Individuals/Veterans.