security-threat-model
by openai
Repository-grounded threat modeling that maps trust boundaries, assets, and abuse paths to concrete code evidence. Enumerates entry points, data flows, and trust boundaries anchored to actual repository structure and configuration. Derives realistic attacker goals tied to specific assets (credentials, PII, integrity-critical state, compute resources) rather than generic checklists. Prioritizes threats using likelihood and impact reasoning, with explicit assumptions about deployment, …
npx skills add https://github.com/openai/skills --skill security-threat-model

More skills by openai
commit
by openai
Create a well-formed git commit from current changes using session history for…
yeet
by openai
Publish local changes to GitHub by confirming scope, committing intentionally, pushing the branch, and opening a draft PR through the GitHub app from this…
codex-cli-runtime
by openai
Internal helper contract for calling the codex-companion runtime from Claude Code
codex-result-handling
by openai
Internal guidance for presenting Codex helper output back to the user
gpt-5-4-prompting
by openai
Internal guidance for composing Codex and GPT-5.4 prompts for coding, review, diagnosis, and research tasks inside the Codex Claude Code plugin
babysit-pr
by openai
Babysit a GitHub pull request after creation by continuously polling review comments, CI checks/workflow runs, and mergeability state until the PR is…
code-breaking-changes
by openai
Breaking changes
code-review
by openai
Run a final code review on a pull request
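Based on the install command shown for security-threat-model above, the other skills in this list can presumably be installed the same way by swapping the value of the `--skill` flag. A minimal sketch, assuming the `skills` CLI accepts any listed skill name:

```shell
# Hypothetical example: install the code-review skill using the same
# pattern shown for security-threat-model. The --skill value is the
# skill's name as it appears in this listing; whether every listed
# skill is installable this way is an assumption.
npx skills add https://github.com/openai/skills --skill code-review
```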