Course syllabus
AISec – The Chalmers AI and Security Course
PhD Course, 7.5 credits, VT26 SP4 (March – June 2026)
- Language: English
- Max. number of students: 20 (PhD students have priority)
- Registration deadline: general SP4 25/26 course registration deadline
- Teacher: Sandro Stucki
Overview
The past decade has seen a steep rise in the use of machine learning (ML), fueled by developments in deep learning and generative AI (GenAI). The rapid evolution and adoption of these techniques bring unique opportunities and challenges, not least for cybersecurity. This course gives an overview of the current state of AI security: the special challenges and novel methods for securing AI systems (security for AI), as well as the role of ML/AI in doing so (AI for security). The main focus of the course is on GenAI and AI agents – how they change the threat landscape and what to do about it. The core lectures are complemented by guest lectures on a range of topics connecting ML to privacy and security. Evaluation consists of quizzes, labs, and a final course project.
The course is primarily aimed at PhD students, but we may accept a limited number of master’s students, provided they fulfill the course prerequisites.
Schedule
See TimeEdit for a detailed schedule.
Lectures are given in person on campus and will not be streamed.
A table with an overview of the lectures, important deadlines and office hours can be found on the main Canvas page.
Prerequisites
- Prior experience completing programming assignments or projects:
  - good programming skills (ideally in Python),
  - familiarity with version control (ideally Git), and
  - working in a Unix-style shell (ideally Linux/macOS).
  - For master’s students: completed 7.5 hp of programming courses.
- Basic knowledge of machine learning.
  - For master’s students: successful completion of a course such as
    - DAT566/DAT695/DIT408 Introduction to Data Science and AI,
    - DAT341/DIT867 Applied machine learning,
    - TDA233/DIT382 Algorithms for machine learning and inference,
    - DAT441/DIT471 Advanced topics in machine learning,
    - or similar.
- Basic knowledge of computer security.
  - For master’s students: successful completion of one of the courses from the Chalmers/GU Security Specialization (or equivalent).
Learning Outcomes
After completion of the course the student should be able to:
Knowledge and understanding
- explain how AI/ML components extend the threat surface of a system;
- describe common attacks against AI-powered systems and possible mitigations;
Skills and abilities
- implement basic safety mechanisms in an AI system;
- evaluate an AI system in a controlled setting;
Judgement and approach
- critically assess scientific publications on AI and security;
- develop, document and present a technical project on AI and security following best practices.
Organization
The course includes one or two lectures per week, some of which will be given by guest lecturers. (See detailed schedule above.) Four of the lectures will feature mandatory quizzes on the material covered thus far. Lectures will not be streamed or recorded, but slides will be made available.
There will be two programming assignments (labs) and a final course project. The labs and project can be done individually or in pairs.
Together, the quizzes, labs, and final course project form the examination of the course.
Course literature
The course literature consists of course slides and recent articles from the research literature. For each lecture, slides and links to the relevant material will be posted on a separate lecture page (linked to from the schedule).
Examination and grading
To pass the course, students must successfully complete
- the quizzes,
- the labs, and
- the final course project.
The quizzes and labs are pass/fail. To pass a quiz or lab, students must obtain at least 60% of the points. Students must pass all quizzes and labs to pass the course.
The course project will be scored using the usual Chalmers grading scale: U, 3, 4 and 5, with 3–5 being passing grades. Grading criteria are described on the course project page.
The final grade will be the grade obtained for the project (or U if the student failed any of the quizzes or labs).
Note: The course examiner may assess individual students in other ways than what is stated above if there are special reasons for doing so, for example if a student has a decision from Chalmers about disability study support.
Who is the examiner?
- For PhD students: this is an individual reading course. After you have completed the course, your PhD examiner will formally decide whether you have passed the course and will obtain credits. To avoid surprises, it is a good idea to inform your supervisor and examiner ahead of time that you are planning to take the course.
- For master’s students: this is an instance of the Research-oriented course in Computer Science and Engineering (DAT235/DIT577) with Ana Bove as examiner. If you have not already done so, email Ana (with Sandro in CC) to formally register for the course. Note that you cannot get credit for the course if you already have a result for a different instance of DAT235/DIT577.
Code of conduct
Students are expected to be familiar with their rights and obligations, including Chalmers' policies on academic integrity and honesty and rules for the use of IT resources. Importantly, while we study methods to attack and defend IT systems in this course, students must strictly adhere to the rules for using IT resources. The obligations also detail the rules for cheating. Cheating includes undisclosed collaboration between groups and not citing your sources.
Use of generative AI in course work
By its very nature, this course will involve the use of (generative) AI tools. The use of such tools for solving course assignments is generally permitted as long as it is in accordance with the instructions of the assignment (lab or course project). However, AI tools must not be used to generate solutions that students are expected to produce independently (code, text), unless such use has been explicitly permitted by the course responsible.
Examples of permitted use of AI tools:
- running benchmarks on an Ollama model following the instructions in a lab assignment,
- improving grammar, spelling, or clarity in text already written by the student,
- finding relevant libraries, tools, datasets, benchmarks, or documentation,
- obtaining administrative or formatting help,
- generating illustrations for slides (and citing the model used).
Examples of prohibited use of AI tools:
- writing or transforming lab or project code (unless doing so is an explicit goal of the project),
- generating report text, slides, or speaker notes,
- proposing analyses, interpretations, or conclusions,
- generating solutions to lab tasks that the student was supposed to solve independently,
- producing attack ideas, defenses, proofs, explanations, or evaluations that the student is expected to develop independently.
When in doubt, check with the course responsible!