All Practice Exams

100+ Free CompTIA SecAI+ Practice Questions

Pass your CompTIA SecAI+ exam on the first try — instant access, no signup required.

✓ No registration ✓ No credit card ✓ No hidden fees ✓ Start practicing immediately
Pass Rate: Not published
100+ Questions
100% Free

Key Facts: CompTIA SecAI+ Exam

60

Maximum Exam Questions

CompTIA SecAI+ CY0-001 exam page

60 min

Exam Duration

CompTIA SecAI+ CY0-001 exam page

600 / 900

Passing Score (scaled)

CompTIA SecAI+ CY0-001 exam page

40%

Securing AI Systems Weight

CompTIA SecAI+ exam objectives

Feb 17, 2026

Launch Date

CompTIA SecAI+ exam page

3 years

Estimated Validity

CompTIA Continuing Education Program

CompTIA SecAI+ (CY0-001) is a 60-question, 60-minute proctored exam launching February 17, 2026, with a scaled passing score of 600 on a scale of 100 to 900. Candidates are tested on basic AI concepts in cybersecurity, securing AI systems (the heaviest domain at 40 percent), AI-assisted security operations, and AI governance, risk, and compliance. Topics include prompt injection, data poisoning, adversarial examples (FGSM, PGD), model inversion and membership inference, secure MLOps, RAG security, MITRE ATLAS, NIST AI RMF, ISO/IEC 42001, and the EU AI Act risk tiers.

Sample CompTIA SecAI+ Practice Questions

Try these sample questions to test your CompTIA SecAI+ exam readiness. Each question includes a detailed explanation. Start the interactive quiz above for the full 100+ question experience with AI tutoring.

1. Which type of machine learning trains a model on labeled examples so it can predict the label for new inputs?
A. Supervised learning
B. Unsupervised learning
C. Reinforcement learning
D. Self-supervised pretraining
Explanation: Supervised learning uses input-output pairs with known labels (for example, emails labeled spam or not spam) so the algorithm can learn a mapping that generalizes to new inputs. Unsupervised learning finds structure without labels, reinforcement learning learns from reward signals, and self-supervised pretraining derives pseudo-labels from the data itself.
2. What does an embedding represent in a modern LLM or RAG pipeline?
A. A dense numeric vector that captures the semantic meaning of text
B. An encrypted hash of the document used for integrity
C. The full token sequence stored verbatim in memory
D. A signed JWT that authorizes a model call
Explanation: An embedding is a fixed-length numeric vector produced by a model so that semantically similar inputs land near each other in vector space. RAG systems use embeddings to retrieve the most relevant documents for a query. Embeddings are not encryption, raw tokens, or authentication tokens.
3. In a Retrieval-Augmented Generation (RAG) architecture, what is the role of the vector database?
A. Store document embeddings and return the most similar chunks for a query
B. Fine-tune the foundation model weights at runtime
C. Validate user authentication tokens before each LLM call
D. Compress the LLM into a smaller distilled student model
Explanation: A RAG system embeds documents and stores those vectors in a vector database. At query time it embeds the user's question and retrieves the nearest neighbors to inject as context for the LLM. The vector store does not fine-tune the model, authenticate users, or distill weights.
4. Which AI artifact is a foundation model that has been further trained on a smaller, task-specific labeled dataset?
A. Fine-tuned model
B. Base model
C. Quantized model
D. Embedding model
Explanation: A fine-tuned model takes a pretrained foundation model and continues training on a narrower labeled dataset to adapt it to a task or domain. A base model is the original pretrained model, quantization compresses weights for inference efficiency, and an embedding model only outputs vectors.
5. Which paradigm best describes a chatbot agent that takes actions, observes the result, and adjusts its plan to maximize a long-term reward?
A. Reinforcement learning
B. Logistic regression
C. K-means clustering
D. Naive Bayes classification
Explanation: Reinforcement learning trains an agent to choose actions that maximize cumulative reward in an environment, observing state transitions over time. Logistic regression is a supervised classifier, k-means is unsupervised clustering, and naive Bayes is a probabilistic classifier - none model sequential reward.
6. What is the AI threat surface in a modern enterprise LLM deployment?
A. Training data, model weights, prompts, plugins, and agent actions - every layer where untrusted input or output can flow
B. Only the GPU drivers and CUDA libraries used during training
C. Only the public-facing chat interface where users type prompts
D. Only the storage bucket holding training data
Explanation: The AI threat surface spans the full lifecycle: training data integrity, model weight confidentiality and integrity, prompt and context injection, plugin or tool calls, and downstream agent actions. Limiting analysis to GPUs, the chat UI, or training storage misses prompt injection, supply chain, and agent abuse paths.
7. Which property of a generative model means it can produce confident-sounding output that is factually wrong?
A. Hallucination
B. Overfitting
C. Quantization loss
D. Tokenization mismatch
Explanation: Hallucination occurs when an LLM generates plausible but incorrect content because the model optimizes for likelihood, not truth. Overfitting is memorization of training data, quantization loss is precision loss from weight compression, and tokenization mismatch is a tokenizer compatibility issue - none describe confident wrong answers.
8. Which lifecycle step happens during training, not inference?
A. Backpropagation updates weights using gradients computed from a loss function
B. Tokens are sampled from the model's output distribution
C. User context is retrieved from a vector database
D. A response is streamed back to the client over HTTPS
Explanation: Backpropagation is a training-time step where the loss is differentiated with respect to weights and weights are updated. Token sampling, RAG retrieval, and streaming responses are all inference-time activities. Knowing this distinction matters because training-time threats (poisoning) differ from inference-time threats (injection).
9. Which AI capability is most often labeled an 'agent' in security literature?
A. An LLM that plans, calls tools, and acts autonomously toward a goal
B. A static text classifier with a fixed label set
C. A pretrained embedding model used only for retrieval
D. A logistic regression deployed behind a REST endpoint
Explanation: An AI agent is an LLM-driven system that decomposes a goal, calls external tools or APIs, observes results, and iterates - often with persistent memory. Classifiers, embedding models, and logistic regression are passive predictors that do not plan or act. OWASP LLM08 (excessive agency) targets exactly this kind of agent.
10. What does a model card document?
A. Intended use, training data, evaluation results, limitations, and ethical considerations of a model
B. The on-disk binary format of the saved model weights
C. The CUDA kernel layout and GPU memory map
D. Only the model's hyperparameter values
Explanation: Introduced by Mitchell et al. (2019), a model card publishes intended use cases, training data sources, performance metrics, known limitations, and ethical considerations. It is a governance artifact, not a binary format spec, GPU layout, or hyperparameter dump.
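The embedding and RAG questions above (questions 2 and 3) both come down to nearest-neighbor search over vectors. As a rough sketch, with toy three-dimensional vectors standing in for real embeddings (production models emit hundreds of dimensions, and the document vectors here are invented purely for illustration), retrieval reduces to cosine similarity:

```python
import math

def cosine_similarity(a, b):
    """Similarity of two embedding vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, doc_vecs, k=1):
    """Return indices of the k documents most similar to the query."""
    scores = [cosine_similarity(query_vec, d) for d in doc_vecs]
    ranked = sorted(range(len(doc_vecs)), key=lambda i: scores[i], reverse=True)
    return ranked[:k]

# Toy 3-dimensional "embeddings" (invented for this sketch).
docs = [
    [0.9, 0.1, 0.0],   # doc 0: about firewalls
    [0.1, 0.9, 0.1],   # doc 1: about phishing
    [0.0, 0.2, 0.9],   # doc 2: about MLOps
]
query = [0.85, 0.15, 0.05]  # query vector pointing close to doc 0

print(retrieve(query, docs))  # doc 0 is the nearest neighbor
```

In a real RAG pipeline the vectors come from an embedding model and the search runs inside a vector database with an approximate-nearest-neighbor index, but the ranking principle is the same - which is also why access control on the vector store matters: whoever can query it can pull back the most relevant chunks.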

About the CompTIA SecAI+ Exam

CompTIA SecAI+ (CY0-001) validates a security practitioner's ability to secure AI and machine learning systems, defend against adversarial attacks, apply AI-assisted security operations, and govern AI risk through frameworks like NIST AI RMF, ISO/IEC 42001, MITRE ATLAS, and the OWASP Top 10 for LLMs.

Questions

60 scored questions

Time Limit

60 minutes

Passing Score

600 (scale 100-900)

Exam Fee

Not publicly disclosed (CompTIA)

CompTIA SecAI+ Exam Content Outline

17%

Basic AI Concepts Related to Cybersecurity

Foundational ML/AI types, training and inference pipelines, supervised vs. unsupervised vs. reinforcement learning, LLM and generative AI fundamentals, embeddings, RAG, and the AI threat surface.

40%

Securing AI Systems

Adversarial attacks (FGSM, PGD), data poisoning, model inversion, membership inference, model theft, prompt injection (direct and indirect), OWASP Top 10 for LLMs, secure MLOps pipelines, secrets management, and AI supply chain security.

24%

AI-Assisted Security

Using AI for SOC alert triage, log and phishing analysis, threat intelligence enrichment, automated incident response, and AI-assisted code review while managing model bias, drift, and false positives.

19%

AI Governance, Risk, and Compliance

NIST AI RMF, ISO/IEC 42001, EU AI Act risk tiers, MITRE ATLAS adversarial tactics, AI red-teaming, model cards, data minimization, RBAC for training data, and acceptable-use policies.

How to Pass the CompTIA SecAI+ Exam

What You Need to Know

  • Passing score: 600 (scale 100-900)
  • Exam length: 60 questions
  • Time limit: 60 minutes
  • Exam fee: Not publicly disclosed

Keys to Passing

  • Complete 500+ practice questions
  • Score 80%+ consistently before scheduling
  • Focus on highest-weighted sections
  • Use our AI tutor for tough concepts

CompTIA SecAI+ Study Tips from Top Performers

1. Memorize the OWASP Top 10 for LLMs by code: LLM01 prompt injection, LLM02 insecure output handling, LLM03 training data poisoning, LLM04 model denial of service, LLM05 supply chain, LLM06 sensitive info disclosure, LLM07 insecure plugin design, LLM08 excessive agency, LLM09 overreliance, LLM10 model theft.
2. Distinguish direct prompt injection (attacker types into the prompt) from indirect injection (malicious instructions hidden in retrieved content, web pages, or RAG sources) - exam scenarios test the difference.
3. Know the four canonical adversarial attack categories: evasion (FGSM, PGD on inference), poisoning (training-time backdoors), model extraction or theft (query-based reconstruction), and model inversion or membership inference (leaking training data).
4. Be fluent with MITRE ATLAS as the AI-equivalent of MITRE ATT&CK - tactics like reconnaissance, ML model access, ML attack staging, and exfiltration appear in scenario questions.
5. Map the NIST AI RMF four functions (Govern, Map, Measure, Manage) and the EU AI Act risk tiers (unacceptable, high, limited, minimal) - governance questions hinge on choosing the right framework or tier.
6. Practice secure MLOps controls: signed models, model registry RBAC, data lineage, secrets in Vault or KMS, isolated training environments, and drift monitoring in production.
7. Understand RAG security: prompt injection through retrieved chunks, embedding inversion attacks, access control on the vector store, and PII redaction before embedding.
8. Treat AI-assisted SOC use cases (alert triage, phishing detection, log summarization) as both a capability and a risk - know when an AI assistant introduces hallucination, bias, or data leakage.
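The evasion category in tip 3 can be made concrete. FGSM (the Fast Gradient Sign Method) nudges an input by a small step eps in the direction of the sign of the loss gradient with respect to the input, pushing the model toward a wrong prediction. Below is a minimal sketch against a toy logistic-regression detector; the weights, bias, and feature vector are all invented for illustration, not taken from any real model:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    """Probability the toy classifier assigns to the 'malicious' class."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm(w, b, x, y, eps):
    """One FGSM step: x_adv = x + eps * sign(dLoss/dx).

    For logistic regression with cross-entropy loss, the gradient of the
    loss with respect to the input is (p - y) * w, where p is the
    predicted probability and y is the true label.
    """
    p = predict(w, b, x)
    grad = [(p - y) * wi for wi in w]
    sign = [1.0 if g > 0 else (-1.0 if g < 0 else 0.0) for g in grad]
    return [xi + eps * s for xi, s in zip(x, sign)]

# Invented toy detector: 3 features, label 1 = malicious.
w = [2.0, -1.0, 0.5]
b = -0.2
x = [1.0, 0.3, 0.8]            # a sample the model flags as malicious

print(predict(w, b, x))         # high probability before the attack
x_adv = fgsm(w, b, x, y=1, eps=0.5)
print(predict(w, b, x_adv))     # probability drops: evasion succeeded
```

In practice FGSM (and its iterated variant PGD) is run against neural networks with an autodiff framework computing the input gradient; the closed-form gradient here works only because logistic regression is simple enough to differentiate by hand, but the attack pattern the exam tests is the same.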

Frequently Asked Questions

What is on the CompTIA SecAI+ CY0-001 exam?

SecAI+ tests four domains: basic AI concepts in cybersecurity (17 percent), securing AI systems (40 percent), AI-assisted security (24 percent), and AI governance, risk, and compliance (19 percent). Core topics include prompt injection, data poisoning, adversarial examples (FGSM, PGD), model inversion and membership inference, OWASP Top 10 for LLMs, secure MLOps, MITRE ATLAS, NIST AI RMF, and ISO/IEC 42001.

How long is the CompTIA SecAI+ exam and how many questions does it have?

CompTIA SecAI+ is a maximum of 60 questions delivered in 60 minutes. The format combines multiple-choice and performance-based items, consistent with other CompTIA cybersecurity exams. Candidates receive a score on the standard CompTIA scaled range of 100 to 900.

What is the passing score for SecAI+?

The passing score for CompTIA SecAI+ is 600 on a scale of 100 to 900. CompTIA uses scaled scoring rather than a raw percentage, so individual question weights vary and the exact percentage of correct answers needed depends on the form delivered.

How much does the SecAI+ exam cost?

CompTIA has not publicly disclosed the SecAI+ CY0-001 exam fee at this time. CompTIA typically prices its security-track certifications in the $400 USD range, but candidates should confirm current pricing on the official CompTIA store before booking.

Who should take CompTIA SecAI+?

CompTIA recommends 3 to 4 years in IT including at least 2 years of hands-on cybersecurity experience, plus a foundation such as Security+, CySA+, or PenTest+. SecAI+ is aimed at SOC analysts, security engineers, and AI/ML platform engineers who need to secure AI workloads or use AI in security operations.

How is SecAI+ different from Security+ or CySA+?

Security+ covers general cybersecurity fundamentals, and CySA+ focuses on threat detection and SOC analysis. SecAI+ is purpose-built for AI security: prompt injection, data poisoning, adversarial ML, secure MLOps, MITRE ATLAS, OWASP Top 10 for LLMs, and AI governance. It complements rather than replaces Security+ and CySA+.