Staff Engineer

Build scalable enterprise AI solutions using LLMs and vector databases on AWS cloud
Bangalore
Senior
Marvell

A leading semiconductor company specializing in storage, processing, networking, security, and connectivity solutions.

AI Enablement Team Developer

Marvell's semiconductor solutions are the essential building blocks of the data infrastructure that connects our world. Across enterprise, cloud and AI, automotive, and carrier architectures, our innovative technology is enabling new possibilities.

At Marvell, you can affect the arc of individual lives, lift the trajectory of entire industries, and fuel the transformative potential of tomorrow. For those looking to make their mark on purposeful and enduring innovation, above and beyond fleeting trends, Marvell is a place to thrive, learn, and lead.

The MBE group is responsible for solutions, platforms, and software for both the Multimarket Business Group (MBG) and Custom Cloud Solutions (CCS). The team's focus is to help the organization adopt and scale AI, serving as the innovation and AI enablement hub within Marvell. We make sure developers, testers, and business teams can easily apply AI in their daily work, and we connect them to the right AI solutions every day.

The team works on real-world AI problems. Projects currently in deployment include AI-assisted code review, code coverage analysis, auto-remediation, and operational efficiency improvements, built with cutting-edge AI tools (AWS Bedrock, n8n, Glean, RAG, Cursor, etc.). The AI Enablement Team is not just plugging in off-the-shelf AI tools; it builds custom solutions for Marvell's unique problems. It is also a strong learning environment: you'll pick up MLOps, orchestration, cloud, and applied AI, skills that are valuable everywhere.

What You Can Expect

  • Design and implement end-to-end AI solutions across infrastructure, model, and orchestration layers.
  • Develop LLM-based applications using Bedrock-hosted models (Anthropic Claude, Mistral Mixtral, Meta Llama 2, etc.).
  • Build and optimize RAG pipelines with vector databases (FAISS, Elasticsearch, Chroma) for intelligent retrieval (see the sketch after this list).
  • Utilize AWS SageMaker for ML model training, fine-tuning, and deployment.
  • Write effective prompts and apply prompt engineering best practices to maximize LLM performance.
  • Develop and maintain REST APIs, dashboards, UI applications, and automation pipelines.
  • Use LangChain and LangGraph for LLM orchestration and multi-agent workflows.
  • Implement Agentic AI workflows with n8n and related frameworks.
  • Collaborate with cross-functional teams to integrate AI solutions into CI/CD pipelines and enterprise systems.
  • Ensure solutions meet standards for scalability, security, and performance.
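
As a rough illustration of the Bedrock and FAISS retrieval work described above, the sketch below embeds a few documents, indexes them, and answers a question with a Bedrock-hosted Claude model. The model IDs, AWS region, and sample documents are placeholders rather than Marvell specifics; a production pipeline would typically be orchestrated with LangChain or LangGraph and fronted by a REST API.

    # Illustrative sketch only: minimal retrieval-augmented generation with
    # Amazon Bedrock and FAISS. Model IDs, region, and documents are placeholders.
    import json

    import boto3
    import faiss
    import numpy as np

    REGION = "us-east-1"                                     # placeholder region
    EMBED_MODEL = "amazon.titan-embed-text-v1"               # placeholder embedding model
    CHAT_MODEL = "anthropic.claude-3-sonnet-20240229-v1:0"   # placeholder chat model

    bedrock = boto3.client("bedrock-runtime", region_name=REGION)

    def embed(text: str) -> np.ndarray:
        """Return a Bedrock (Titan) embedding for one piece of text."""
        resp = bedrock.invoke_model(modelId=EMBED_MODEL,
                                    body=json.dumps({"inputText": text}))
        return np.array(json.loads(resp["body"].read())["embedding"], dtype="float32")

    # Index a handful of sample documents (placeholders for internal content).
    docs = [
        "The code review bot flags changes that lack unit tests before merge.",
        "The coverage service publishes per-module coverage to a dashboard.",
        "Auto-remediation retries CI jobs that fail with known transient errors.",
    ]
    vectors = np.stack([embed(d) for d in docs])
    index = faiss.IndexFlatL2(vectors.shape[1])
    index.add(vectors)

    # Retrieve the closest documents for a question, then ask the LLM with that context.
    question = "How does the team improve code coverage?"
    _, hits = index.search(embed(question).reshape(1, -1), 2)
    context = "\n".join(docs[i] for i in hits[0])

    answer = bedrock.invoke_model(
        modelId=CHAT_MODEL,
        body=json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 256,
            "messages": [{
                "role": "user",
                "content": [{"type": "text",
                             "text": f"Context:\n{context}\n\nQuestion: {question}"}],
            }],
        }),
    )
    print(json.loads(answer["body"].read())["content"][0]["text"])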

What We're Looking For

  • 7–9 years of experience in AI/ML application development and backend engineering.
  • Strong hands-on expertise in:
    • Infrastructure Layer: AWS Cloud services (Bedrock, SageMaker, Lambda, S3, IAM, API Gateway).
    • Model Layer: LLMs via Bedrock (Mistral Mixtral, Anthropic Claude, Meta Llama 2, etc.), including fine-tuning and inference.
    • Frameworks: Python (FastAPI, Flask) and ML/NLP libraries such as Hugging Face Transformers and spaCy.
    • LangChain & LangGraph for orchestration of complex AI workflows.
    • Agentic AI platforms/frameworks: Glean, n8n, AutoGen, LangGraph, CrewAI.
    • Vector Databases: FAISS, Elasticsearch, Chroma.
    • RAG implementation for enterprise-grade retrieval-augmented LLM applications.
    • NLP techniques: text classification, summarization, embeddings, entity recognition.
    • Backend Development: REST APIs, dashboards, UI integration, automation pipelines (see the API sketch after this list).
    • Code generation tools: GitHub Copilot, Cursor.
  • Proven ability to deploy and manage AI workloads on AWS Cloud.
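
For the backend development and REST API items above, here is a minimal sketch of how a Bedrock call might be exposed as an HTTP endpoint with FastAPI; the model ID and region are again placeholders, not a prescribed implementation.

    # Illustrative sketch only: a FastAPI endpoint that forwards a question to a
    # Bedrock-hosted model. Model ID and region are placeholders.
    import json

    import boto3
    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()
    bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")  # placeholder region

    class Ask(BaseModel):
        question: str

    @app.post("/ask")
    def ask(req: Ask) -> dict:
        # Forward the question to the model and return the generated text.
        resp = bedrock.invoke_model(
            modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # placeholder model ID
            body=json.dumps({
                "anthropic_version": "bedrock-2023-05-31",
                "max_tokens": 256,
                "messages": [{"role": "user",
                              "content": [{"type": "text", "text": req.question}]}],
            }),
        )
        return {"answer": json.loads(resp["body"].read())["content"][0]["text"]}

Served locally with uvicorn, the endpoint accepts a JSON body such as {"question": "..."} on /ask and returns the model's reply.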