AI Inference Engineer

Dublin
F5
Engineer
Posted: 30 April
Offer description

At F5, we strive to bring a better digital world to life. Our teams empower organizations across the globe to create, secure, and run applications that enhance how we experience our evolving digital world. We are passionate about cybersecurity, from protecting consumers from fraud to enabling companies to focus on innovation. Everything we do centers around people. That means we obsess over how to make the lives of our customers, and their customers, better. And it means we prioritize a diverse F5 community where each individual can thrive.
The AI Inference Engineer plays a critical role in the AI lifecycle by bridging the gap between high-performance model development and optimized deployment environments. This position focuses on optimizing Large Language Models (LLMs) for inference, serving diverse environments—from GPU‑rich data centers to resource‑constrained edge devices—with a strong emphasis on maximizing throughput, minimizing latency, and maintaining model accuracy. This role is pivotal in advancing F5’s AI capabilities, ensuring enterprise‑grade reliability by leveraging hardware acceleration, designing scalable infrastructure, and monitoring system performance.
Key Responsibilities
High-Performance AI Serving

Build and maintain robust inference engines using tools like vLLM, TGI (Text Generation Inference), and NVIDIA Triton, ensuring high performance at scale.
Handle deployment optimizations to deliver low‑latency AI serving solutions for multiple business applications.
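Serving stacks such as vLLM and TGI expose an OpenAI-compatible HTTP API, which is typically how business applications consume a low-latency endpoint. A minimal client sketch follows; the endpoint URL and model name are placeholders, not values from this posting.

```python
import json
import urllib.request

# Placeholder endpoint for a locally running vLLM/TGI server (assumption).
ENDPOINT = "http://localhost:8000/v1/completions"

def build_request(prompt: str, model: str = "my-llm",
                  max_tokens: int = 64) -> urllib.request.Request:
    """Build a POST request for an OpenAI-compatible completions endpoint."""
    payload = {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "stream": True,  # stream tokens to keep time-to-first-token low
    }
    return urllib.request.Request(
        ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

if __name__ == "__main__":
    req = build_request("Hello")
    # urllib.request.urlopen(req) would send it; omitted since no server runs here.
```

Setting `stream: true` lets the client render tokens as they are generated, which is what makes TTFT (rather than total completion time) the user-visible latency metric.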

Hardware Acceleration and Optimization

Profile and optimize models for specialized hardware backends, including NVIDIA GPUs (CUDA/TensorRT), Apple Silicon (CoreML), and AI accelerators like TPUs and LPUs.
Collaborate with hardware teams to maximize utilization and performance across various computational environments.

Inference Orchestration and Scalability

Design and implement auto‑scaling architectures for online (real‑time) and batch inference pipelines, leveraging Kubernetes for inference routing and orchestration.
Ensure software solutions are optimized for peak performance during traffic spikes, maintaining reliability and scalability.
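The core of the auto-scaling logic described above can be sketched as the replica-sizing rule Kubernetes' Horizontal Pod Autoscaler applies: scale so each replica stays near a target load, clamped to configured bounds. Parameter names here are illustrative, not from the posting.

```python
import math

def desired_replicas(current_rps: float, target_rps_per_replica: float,
                     min_replicas: int = 1, max_replicas: int = 50) -> int:
    """HPA-style sizing: enough replicas that each carries at most its
    target request rate, clamped to [min_replicas, max_replicas]."""
    if target_rps_per_replica <= 0:
        raise ValueError("target_rps_per_replica must be positive")
    raw = math.ceil(current_rps / target_rps_per_replica)
    return max(min_replicas, min(max_replicas, raw))
```

For example, 95 req/s against a 10 req/s-per-replica target yields 10 replicas; the `max_replicas` clamp is what bounds cost during extreme traffic spikes.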

Performance Monitoring and Observability

Establish robust observability frameworks to monitor Time to First Token (TTFT), tokens per second, and memory bandwidth utilization against service‑level agreements (SLAs).
Build and execute performance and load testing suites to identify bottlenecks and ensure consistent reliability at scale.
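The two headline metrics above fall directly out of per-token timestamps, which is how an observability framework typically derives them. A minimal sketch, assuming timestamps are in seconds:

```python
def inference_metrics(request_start: float, token_timestamps: list[float]) -> dict:
    """Derive TTFT and decode throughput from per-token completion timestamps.
    TTFT = first token time minus request start; tokens/sec is measured over
    the decode phase (first token to last token)."""
    if not token_timestamps:
        raise ValueError("no tokens produced")
    ttft = token_timestamps[0] - request_start
    decode_time = token_timestamps[-1] - token_timestamps[0]
    # (n - 1) inter-token intervals over the decode window
    tps = (len(token_timestamps) - 1) / decode_time if decode_time > 0 else float("inf")
    return {"ttft_s": ttft, "tokens_per_s": tps}
```

Separating TTFT (dominated by the prefill phase) from tokens/sec (the decode phase) matters because the two are bottlenecked by different resources: compute for prefill, memory bandwidth for decode.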

Technical Requirements
Required Skills

Programming Languages: Proficiency in Python, C++, Rust, or Go, applied to high‑performance AI workflows.
Inference Tools: Proven hands‑on experience with tools like vLLM, TensorRT, Llama.cpp, and Ollama for inference development and optimization.
Infrastructure Expertise: Strong familiarity with infrastructure technologies, including Docker, Kubernetes, and cloud platforms such as AWS, GCP, and Azure.
Hardware Optimization Expertise: Comprehensive understanding of GPU and AI hardware, including techniques for profiling and optimizing performance for accelerators like NVIDIA GPUs and TPUs.

Preferred Experience

Prior experience deploying LLMs with advanced inference techniques such as speculative decoding or PagedAttention.
Contributions to open‑source inference libraries or hardware‑level kernel development (e.g., CUDA, Triton kernels).
Background in MLOps or SRE roles focused on high‑performance AI endpoints and reliability during demand surges.
Proficiency in designing scalable solutions for high‑throughput inference environments optimized for traffic bursts.
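Speculative decoding, mentioned above, pairs a cheap draft model with the target model: the draft proposes several tokens, and the target verifies them in a single pass. The following toy sketch shows only the greedy acceptance rule, with both models replaced by plain callables (production systems use a sampling-based acceptance test instead).

```python
from typing import Callable

def speculative_step(context: list[str],
                     draft: Callable[[list[str], int], list[str]],
                     target_next: Callable[[list[str]], str],
                     k: int = 4) -> list[str]:
    """One greedy speculative-decoding step: the draft proposes k tokens and
    the target verifies them left to right. Accepted tokens are kept; at the
    first disagreement the target's own token is substituted and the rest of
    the draft is discarded."""
    proposed = draft(context, k)
    accepted: list[str] = []
    for tok in proposed:
        expected = target_next(context + accepted)
        if tok == expected:
            accepted.append(tok)       # draft agreed with target: keep it
        else:
            accepted.append(expected)  # disagreement: take target's token, stop
            break
    return accepted
```

The win is that each step can emit several tokens for roughly one target-model forward pass, cutting decode latency whenever the draft's acceptance rate is high.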

Success Metrics (KPIs)

Latency Reduction: Continuously improve inference latency metrics, ensuring minimal TTFT and maximum tokens per second.
Cost Efficiency: Achieve lower "Cost per 1K Tokens" through better resource utilization and hardware optimization.
Scalability: Maintain system stability and reliability during traffic spikes, ensuring performance consistency across environments.
Throughput Maximization: Deploy models optimized for peak hardware utilization and maximum throughput.
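The "Cost per 1K Tokens" KPI reduces to simple arithmetic once sustained throughput is known; a sketch, assuming a single fully utilized GPU with a known hourly price:

```python
def cost_per_1k_tokens(gpu_hourly_cost_usd: float, tokens_per_second: float) -> float:
    """Serving cost per 1,000 generated tokens on one GPU, assuming the GPU
    is fully utilized at the given sustained throughput."""
    if tokens_per_second <= 0:
        raise ValueError("throughput must be positive")
    tokens_per_hour = tokens_per_second * 3600
    return gpu_hourly_cost_usd / tokens_per_hour * 1000
```

For example, a $3.60/hour GPU sustaining 1,000 tokens/sec costs $0.001 per 1K tokens; doubling throughput through batching or quantization halves the figure, which is why throughput and cost efficiency are listed as linked KPIs.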

Why Join F5?

Collaborating with cutting‑edge technologies and hardware solutions to support real‑time AI applications.
Advancing your career in a fast‑paced, multidisciplinary environment focused on innovation, scalability, and problem‑solving.
Driving transformative projects that deliver real‑time AI reliability to global customers while maintaining cost and efficiency standards.
Working on advanced MLOps solutions that seamlessly scale enterprise AI systems and shape the future of intelligent deployment.

What Success Looks Like:

Combine technical expertise and problem‑solving skills to deliver low‑latency, scalable, and high‑performing AI prediction systems.
Collaborate efficiently across cross‑functional teams, participating in knowledge sharing and system refinement.
Demonstrate initiative by driving optimizations across hardware, tools, and orchestration processes, balancing immediate solutions with long‑term architectural goals.
Translate complex AI and inference workflows into practical solutions that align with F5's strategic objectives.

Equal Employment Opportunity
It is the policy of F5 to provide equal employment opportunities to all employees and employment applicants without regard to unlawful considerations of race, religion, color, national origin, sex, sexual orientation, gender identity or expression, age, sensory, physical, or mental disability, marital status, veteran or military status, genetic information, or any other classification protected by applicable local, state, or federal laws. This policy applies to all aspects of employment, including, but not limited to, hiring, job assignment, compensation, promotion, benefits, training, discipline, and termination. F5 offers a variety of reasonable accommodations for candidates. Requesting an accommodation is completely voluntary. F5 will assess the need for accommodations in the application process separately from those that may be needed to perform the job. Request by contacting accommodations@f5.com.