Senior Artificial Intelligence Research Engineer
Vanderbilt University
Position Summary:
The Senior Artificial Intelligence Research Engineer is part of the Wicked Problems Laboratory within the Institute of National Security at Vanderbilt University and is a key technical contributor focused on advancing research and development in agentic AI frameworks, large-scale data systems, and AI-driven analytic tools for national security applications. This position is deeply technical, emphasizing the design, construction, testing, and optimization of systems that integrate diverse APIs, large language models (LLMs), and data pipelines to support adaptive, mission-critical workflows.
The role blends advanced AI engineering with applied research execution—designing, programming, and coordinating agentic AI architectures, developing analytic tools, and evaluating emerging threats. Strong capability in AI model development, secure system design, and generative AI analysis is essential, particularly in contexts shaped by adversarial pressures and evolving national security challenges.
Reporting to the Head of the Wicked Problems Lab, the Senior AI Research Engineer collaborates with research scientists, engineers, faculty, and external partners to assess threat landscapes, strengthen cyber resiliency, and accelerate the adoption of secure, advanced technologies across government, industry, and public-sector infrastructures.
Key Functions and Expected Performance:
Lead and support AI research activities, including training and tuning small- to medium-sized large language models and coordinating multi-agent AI systems to advance research objectives.
Conduct AI-driven threat analysis to produce actionable security insights.
Support the integration, evaluation, and testing of AI capabilities in laboratory and partner operational environments.
Develop high-quality documentation—including system diagrams, test results, and sponsor deliverables—to support technology maturation and transition.
Maintain expertise in emerging AI-driven threats, adversarial models, and advanced AI technologies to shape research direction.
Collaborate with faculty, interdisciplinary teams, and external partners to support grant objectives, publications, prototypes, and demonstrations.
Present technical insights through briefings, reports, and presentations to academic, industry, and operational audiences.
Support strategic relationships with government and industry partners while safeguarding research data, systems, and intellectual property.
Supervisory Relationships:
This position does not have supervisory responsibility; it reports administratively and functionally to the Executive Director of the Institute of National Security.
Education and Certifications:
Bachelor’s degree in Computer Science, Computer Engineering, Data Science, or a related technical field is required.
Master’s degree in a related field is preferred.
Required Experience and Skills:
3–5+ years of hands-on AI engineering experience in enterprise, research, or critical-infrastructure environments.
Direct experience training and tuning foundation models, including parameter-efficient techniques (LoRA, QLoRA, adapters), dataset construction, and evaluation across benchmarks.
Practical expertise integrating LLMs with external tools and APIs, including retrieval systems, vector databases, function calling, or multi-agent orchestration.
Expert-level Python proficiency, including designing and optimizing complex AI research pipelines, building high-performance training and inference systems, and developing secure, production-quality tooling in modern Python ecosystems (PyTorch, Hugging Face, Ray, vLLM).
Strong software engineering capability, including implementing research-grade systems, experiment frameworks, structured logging, and reproducible workflows.
Experience building secure data and model pipelines, including preprocessing, evaluation, and monitoring in sensitive or adversarial environments.
Applied adversarial AI or threat-focused experience, such as red-teaming LLMs, vulnerability analysis, or evaluating model behavior under adversarial conditions.
Ability to design and execute end-to-end experiments, from rapid prototyping through operational recommendations.
Strong technical communication skills, including producing high-quality documentation and sponsor-facing deliverables.
Preferred Experience and Skills:
Hands-on work with AI–cyber hybrid systems, such as automated threat detection, network traffic analysis, or AI-augmented defense tooling.
Knowledge of large-scale data systems, distributed computing, GPU optimization, or containerized AI environments (Docker, Kubernetes).
Experience building or evaluating agentic AI systems—multi-agent architectures, planning/decision-making agents, or autonomous workflows.
Experience implementing or evaluating adversarial ML techniques, including poisoning, evasion, jailbreaks, or secure-LLM hardening.
Security Clearance:
Eligibility for a U.S. security clearance is strongly preferred.
At Vanderbilt University, our work, regardless of title or role, is in service to an important and noble mission: every member of our community advances knowledge and transforms lives on a daily basis. Located in Nashville, Tennessee, on a 330+ acre campus and arboretum dating back to 1873, Vanderbilt is proud to have been named one of “America’s Best Large Employers,” as well as a top employer in Tennessee and the Nashville metropolitan area, by Forbes for several years running. We welcome those who are interested in learning and growing professionally with an employer that strives to create, foster, and sustain opportunities as an employer of choice.
We understand you have a choice of where to work and pursue a career. We understand you are unique and have a story. We want to hear it. We encourage you to apply today so that you might become a part of our story.