Robustness Analysis Researcher

Duration: Permanent

Pay rate: $180K-$200K

Technical/Functional Skills-

  • Strong understanding of machine learning principles, especially in the context of LLMs.
  • Experience in identifying vulnerabilities, anomaly detection, or red teaming.
  • Expertise in bias, discrimination, and other safety issues in AI and ML systems.
  • Strong verbal and written communication skills, with the ability to work effectively across internal and external organizations and virtual teams.
  • Advanced degree in computer science, machine learning, statistics, or a related field (preferred).

Skills:    LLMs, Anomaly Detection, Red Teaming

Responsibilities-

  • Identify potential risks and vulnerabilities of LLM features and how those may differ across populations and model input types.
  • Evaluate potential risks and vulnerabilities by red teaming (i.e., trying to elicit harmful outputs), as well as collecting data and running experiments.
  • Assist other team members and testers with offensive techniques and approaches to scale AI red teaming.
  • Work with stakeholders to mitigate risks and perform testing to ensure progress.
  • Research new and emerging threats, such as prompt injection, to inform the organization, improve red teaming efficacy and accuracy, and stay current.