Content filters ensure that CurricuLLM provides only safe, age-appropriate, and curriculum-aligned responses for different user groups. Each filter corresponds to one of the following content categories:
| Category | What It Covers |
|---|---|
| Violent Crimes | Encouraging or describing violent or criminal acts. |
| Non-Violent Crimes | Crimes without physical harm (e.g. fraud, hacking, theft). |
| Sex Crimes | Criminal sexual behaviour, such as sexual assault. |
| Child Exploitation | Any sexual content involving minors. |
| Defamation | False statements harming a person’s reputation. |
| Specialized Advice | Expert-level legal, medical, or financial advice. |
| Privacy | Sharing personal or identifying information. |
| Intellectual Property | Copyright or trademark infringement. |
| Indiscriminate Weapons | Instructions for creating weapons of mass destruction. |
| Hate | Hate speech or discrimination against protected groups. |
| Self-Harm | Encouragement or facilitation of self-harm or suicide. |
| Sexual Content | Explicit or erotic content involving adults. |
| Elections | False or misleading information about elections. |
| Code Interpreter Abuse | Attempts to misuse system capabilities (e.g. security bypasses). |
| Profanity | Use of offensive or vulgar language, even if not linked to other categories. |
Content filters are the backbone of safe use. Configured correctly, they let schools trust that every interaction stays age-appropriate and aligned with teaching and learning goals.
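As a rough illustration of what "setting them correctly" could look like, the sketch below models per-audience filter profiles in Python. CurricuLLM's actual configuration format and API are not shown in this section; the category identifiers mirror the table above, while the class names, methods, and profile names are purely hypothetical.

```python
# Hypothetical sketch of per-audience content-filter profiles.
# Category identifiers mirror the table above; everything else
# (FilterProfile, allow, profile names) is illustrative only.

from dataclasses import dataclass, field

# Categories from the table above, as stable identifiers.
ALL_CATEGORIES = {
    "violent_crimes", "non_violent_crimes", "sex_crimes",
    "child_exploitation", "defamation", "specialized_advice",
    "privacy", "intellectual_property", "indiscriminate_weapons",
    "hate", "self_harm", "sexual_content", "elections",
    "code_interpreter_abuse", "profanity",
}

@dataclass
class FilterProfile:
    """Which categories are blocked for a given user group."""
    name: str
    blocked: set = field(default_factory=lambda: set(ALL_CATEGORIES))

    def allow(self, category: str) -> None:
        # Relax a single category for this profile, e.g. permitting
        # election-related discussion for teaching staff.
        self.blocked.discard(category)

    def is_blocked(self, category: str) -> bool:
        return category in self.blocked

# Example: a pupil-facing profile blocks every category by default,
# while a teacher-facing profile relaxes one category.
pupils = FilterProfile(name="primary_pupils")
teachers = FilterProfile(name="teaching_staff")
teachers.allow("elections")

assert pupils.is_blocked("profanity")
assert not teachers.is_blocked("elections")
```

The key design choice in this sketch is that every category starts blocked and must be relaxed explicitly per profile, which keeps the default posture safe for the youngest users.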