diff --git a/examples/README.md b/examples/README.md
index 5e827dd..948a0a0 100644
--- a/examples/README.md
+++ b/examples/README.md
@@ -1,9 +1,9 @@
# Examples
-| Project Name | Project Type | Security Design | Threat Modeling |
-| --- | ---| --- | --- |
-| [caddy](https://github.com/caddyserver/caddy) - Fast and extensible multi-platform HTTP/1-2-3 web server with automatic HTTPS<br>command...<br>`python ai_security_analyzer/app.py -t caddy/ -v --project-type go -o examples/CADDY-o1-preview.md --agent-model o1-preview --agent-temperature 1` | go | [o1-preview](./CADDY-o1-preview.md), [gpt-4o](./CADDY-gpt-4o.md) | [threat model from design - o1-preview](./TM-FROM-DESIGN-CADDY-o1-preview.md) |
-| [screenshot-to-code](https://github.com/abi/screenshot-to-code) - Drop in a screenshot and convert it to clean code (HTML/Tailwind/React/Vue)<br>command...<br>`python ai_security_analyzer/app.py -t screenshot-to-code/ -v -o examples/SCREENSHOT-TO-CODE-o1-preview.md --agent-model o1-preview --agent-temperature 1` | python | [o1-preview](./SCREENSHOT-TO-CODE-o1-preview.md), [gpt-4o](./SCREENSHOT-TO-CODE-gpt-4o.md) | [o1-preview](./TM-SCREENSHOT-TO-CODE-o1-preview.md), [from design - o1-preview](./TM-SCREENSHOT-TO-CODE-o1-preview.md) |
-| [requests](https://github.com/psf/requests) - A simple, yet elegant, HTTP library<br>command...<br>`python ai_security_analyzer/app.py -t requests/ -v --exclude "**/ISSUE_TEMPLATE*,**/CODE_OF_CONDUCT.md,**/CONTRIBUTING.md,**/FUNDING.yml" --include "**/*.cfg,**/*.rst" -o examples/REQUESTS-o1-preview.md --agent-model o1-preview --agent-temperature 1` | python | [gpt-4o](./REQUESTS-gpt-4o.md), [o1-preview](./REQUESTS-o1-preview.md) | [o1-preview](./TM-REQUESTS-o1-preview.md), [from design - o1-preview](./TM-REQUESTS-o1-preview.md) |
-| [flask](https://github.com/pallets/flask) - The Python micro framework for building web applications<br>command...<br>`python ai_security_analyzer/app.py -t flask/ -v --exclude "**/pull_request_template.md,**/ISSUE_TEMPLATE*,**/CODE_OF_CONDUCT.md" --include "**/requirements/*.txt,**/*.rst" -o examples/FLASK-o1-preview.md --agent-model o1-preview --agent-temperature 1` | python | [gpt-4o](./FLASK-gpt-4o.md), [o1-preview](./FLASK-o1-preview.md) | [o1-preview](./TM-FLASK-o1-preview.md), [from design - o1-preview](./TM-FROM-DESIGN-FLASK-o1-preview.md) |
-| [fabric-agent-action](https://github.com/xvnpw/fabric-agent-action) - A GitHub action that leverages fabric patterns through an agent-based approach<br>command...<br>`python ai_security_analyzer/app.py -v -t fabric-agent-action/ --exclude "**/prompts/**" -o examples/FABRIC-AGENT-ACTION-o1-preview.md --agent-model o1-preview --agent-temperature 1` | [o1-preview](./FABRIC-AGENT-ACTION-o1-preview.md) | [o1-preview](./TM-FABRIC-AGENT-ACTION-o1-preview.md), [gpt-4o](./TM-FABRIC-AGENT-ACTION-gpt-4o.md) |
+| Project Name | Project Type | Security Design | Threat Modeling | Threat Model from Security Design (using `create_stride_threat_model`) |
+| --- | --- | --- | --- | --- |
+| [caddy](https://github.com/caddyserver/caddy) - Fast and extensible multi-platform HTTP/1-2-3 web server with automatic HTTPS<br>command...<br>`python ai_security_analyzer/app.py -t caddy/ -v --project-type go -o examples/CADDY-o1-preview.md --agent-model o1-preview --agent-temperature 1` | go | [o1-preview](./CADDY-o1-preview.md), [gpt-4o](./CADDY-gpt-4o.md) | - | [o1-preview](./TM-FROM-DESIGN-CADDY-o1-preview.md) |
+| [screenshot-to-code](https://github.com/abi/screenshot-to-code) - Drop in a screenshot and convert it to clean code (HTML/Tailwind/React/Vue)<br>command...<br>`python ai_security_analyzer/app.py -t screenshot-to-code/ -v -o examples/SCREENSHOT-TO-CODE-o1-preview.md --agent-model o1-preview --agent-temperature 1` | python | [o1-preview](./SCREENSHOT-TO-CODE-o1-preview.md), [gpt-4o](./SCREENSHOT-TO-CODE-gpt-4o.md) | [o1-preview](./TM-SCREENSHOT-TO-CODE-o1-preview.md) | [o1-preview](./TM-FROM-DESIGN-SCREENSHOT-TO-CODE-o1-preview.md) |
+| [requests](https://github.com/psf/requests) - A simple, yet elegant, HTTP library<br>command...<br>`python ai_security_analyzer/app.py -t requests/ -v --exclude "**/ISSUE_TEMPLATE*,**/CODE_OF_CONDUCT.md,**/CONTRIBUTING.md,**/FUNDING.yml" --include "**/*.cfg,**/*.rst" -o examples/REQUESTS-o1-preview.md --agent-model o1-preview --agent-temperature 1` | python | [gpt-4o](./REQUESTS-gpt-4o.md), [o1-preview](./REQUESTS-o1-preview.md) | [o1-preview](./TM-REQUESTS-o1-preview.md) | [o1-preview](./TM-FROM-DESIGN-REQUESTS-o1-preview.md) |
+| [flask](https://github.com/pallets/flask) - The Python micro framework for building web applications<br>command...<br>`python ai_security_analyzer/app.py -t flask/ -v --exclude "**/pull_request_template.md,**/ISSUE_TEMPLATE*,**/CODE_OF_CONDUCT.md" --include "**/requirements/*.txt,**/*.rst" -o examples/FLASK-o1-preview.md --agent-model o1-preview --agent-temperature 1` | python | [gpt-4o](./FLASK-gpt-4o.md), [o1-preview](./FLASK-o1-preview.md) | [o1-preview](./TM-FLASK-o1-preview.md) | [o1-preview](./TM-FROM-DESIGN-FLASK-o1-preview.md) |
+| [fabric-agent-action](https://github.com/xvnpw/fabric-agent-action) - A GitHub action that leverages fabric patterns through an agent-based approach<br>command...<br>`python ai_security_analyzer/app.py -v -t fabric-agent-action/ --exclude "**/prompts/**" -o examples/FABRIC-AGENT-ACTION-o1-preview.md --agent-model o1-preview --agent-temperature 1` | python | [o1-preview](./FABRIC-AGENT-ACTION-o1-preview.md) | [o1-preview](./TM-FABRIC-AGENT-ACTION-o1-preview.md), [gpt-4o](./TM-FABRIC-AGENT-ACTION-gpt-4o.md) | [o1-preview](./TM-FROM-DESIGN-FABRIC-AGENT-ACTION-o1-preview.md) |
diff --git a/examples/TM-FROM-DESIGN-FABRIC-AGENT-ACTION-o1-preview.md b/examples/TM-FROM-DESIGN-FABRIC-AGENT-ACTION-o1-preview.md
new file mode 100644
index 0000000..d69ce89
--- /dev/null
+++ b/examples/TM-FROM-DESIGN-FABRIC-AGENT-ACTION-o1-preview.md
@@ -0,0 +1,96 @@
+## ASSETS
+
+The following assets require protection within the **Fabric Agent Action** system:
+
+1. **API Keys for LLM Providers**: High sensitivity; unauthorized access could lead to misuse and financial costs.
+2. **Source Code and Workflow Configurations**: Medium sensitivity; contain the automation logic and references to secrets that could be exploited if modified.
+3. **User Input Data**: Medium sensitivity; may contain proprietary or confidential information.
+4. **Integrity of Automated Workflows**: Ensuring workflows execute correctly without unauthorized modifications.
+5. **GitHub Secrets**: Includes sensitive information like API keys and tokens.
+6. **Action Execution Environment**: The GitHub Actions Runner where the action executes.
+7. **Communication Data with LLM Providers**: Data sent to external LLM services.
+
+## TRUST BOUNDARIES
+
+The trust boundaries within the system are as follows:
+
+1. **User and GitHub Platform**: Boundary between the developer (user) and the GitHub platform.
+2. **GitHub Platform and GitHub Actions Runner**: Boundary between the GitHub platform and the runner executing actions.
+3. **GitHub Actions Runner and External LLM Provider**: Boundary between the runner (trusted environment) and the external LLM provider's API.
+4. **Fabric Agent Action and LLM Provider API**: Boundary when the action communicates with the LLM API.
+5. **Public Repositories and External Contributors**: Boundary between the repository and external users (e.g., contributors via pull requests).
+6. **GitHub Actions Runner and Internet**: Boundary between the runner and external networks.
+
+## DATA FLOWS
+
+1. **DF1: User Triggers Workflow**
+ - **From**: User (Developer)
+ - **To**: GitHub Platform
+ - **Description**: User commits code or comments to trigger the workflow.
+ - **Crosses Trust Boundary**: Yes (User ↔ GitHub Platform)
+
+2. **DF2: GitHub Triggers Action Execution**
+ - **From**: GitHub Platform
+ - **To**: GitHub Actions Runner (Fabric Agent Action)
+ - **Description**: GitHub initiates the action on the runner.
+ - **Crosses Trust Boundary**: Yes (GitHub Platform ↔ GitHub Actions Runner)
+
+3. **DF3: Fabric Agent Action Calls LLM Provider API**
+ - **From**: Fabric Agent Action (GitHub Actions Runner)
+ - **To**: LLM Provider API
+ - **Description**: The action sends requests to the LLM API and receives responses.
+ - **Crosses Trust Boundary**: Yes (GitHub Actions Runner ↔ LLM Provider API)
+
+4. **DF4: Fabric Agent Action Updates GitHub**
+ - **From**: Fabric Agent Action
+ - **To**: GitHub Platform
+ - **Description**: The action updates the GitHub repository (e.g., comments, statuses).
+ - **Crosses Trust Boundary**: Yes (GitHub Actions Runner ↔ GitHub Platform)
+
+5. **DF5: External Contributors Submit Inputs**
+ - **From**: External Contributors
+ - **To**: GitHub Platform
+ - **Description**: External users submit pull requests or issues that may trigger workflows.
+ - **Crosses Trust Boundary**: Yes (External Contributors ↔ GitHub Platform)
+
+## THREAT MODEL
+
+| THREAT ID | COMPONENT NAME | THREAT NAME | STRIDE CATEGORY | WHY APPLICABLE | HOW MITIGATED | MITIGATION | LIKELIHOOD EXPLANATION | IMPACT EXPLANATION | RISK SEVERITY |
+|-----------|------------------------|------------------------------------------------------------------------------------------------------------------------------|-----------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------|--------------|
+| 0001 | Fabric Agent Action | Unauthorized users triggering actions causing unexpected API usage and escalating costs | Elevation of Privilege | Without proper access control, unauthorized users could trigger the action, leading to increased costs and potential misuse | Access control patterns implemented using conditional statements in workflows (Security Control #1) | Enforce strict authorization checks in workflows to ensure only authorized users can trigger actions | Likely if workflows are not properly secured; external users may attempt to trigger actions | High financial costs due to excessive API usage; potential data leakage | High |
+| 0002 | Fabric Agent Action | Exposure of API keys through logs or environment variables | Information Disclosure | If the action logs sensitive information or improperly handles environment variables, API keys could be exposed | Environment variable management to prevent exposure in logs (`entrypoint.sh` and workflows) | Ensure that sensitive variables are not logged; scrub logs for sensitive data; use environment variable management best practices | Possible if logging is misconfigured or errors occur | Compromise of API keys; unauthorized access to LLM services; financial costs | High |
+| 0003 | Fabric Agent Action | Injection attacks through unvalidated user inputs leading to code execution | Tampering | User-provided inputs are used by the action; without validation, malicious inputs could alter execution flow or run arbitrary code | Input validation is recommended but not currently implemented | Implement robust input validation to sanitize and validate all inputs; use allowlists or schemas to prevent injection attacks | Possible if inputs are not properly validated; attackers could craft malicious inputs | Execution of unauthorized code; compromise of the action environment; further attacks | High |
+| 0004 | LLM Provider API | Interception of data sent to LLM provider resulting in information disclosure | Information Disclosure | Data sent to LLM provider could be intercepted if not properly encrypted | HTTPS used for all communications with LLM providers (Security Requirement under Cryptography) | Ensure TLS is enforced for all API communications; validate certificates; consider using mutual TLS if supported | Unlikely due to widespread use of HTTPS; possible if man-in-the-middle attacks occur | Exposure of sensitive data sent to LLM; potential data privacy violations | Medium |
+| 0005 | GitHub Actions Runner | Compromise of the runner environment leading to unauthorized access or data leakage | Elevation of Privilege | If the runner environment is compromised, attackers could access sensitive data or alter the action's execution | GitHub-hosted runners are isolated; GitHub manages security controls | Use self-hosted runners with hardened configurations; monitor runner security; limit access and permissions | Unlikely for GitHub-hosted runners; possible for self-hosted runners if misconfigured | Full compromise of action environment; data leakage; unauthorized code execution | High |
+| 0006 | GitHub Secrets | Unauthorized access to GitHub Secrets leading to exposure of API keys and sensitive information | Information Disclosure | Secrets stored in GitHub could be accessed by unauthorized users if permissions are misconfigured | Secure storage of API keys using GitHub Secrets (Security Control #2) | Regularly audit repository access permissions; restrict administrative privileges; monitor access logs | Possible if repository permissions are misconfigured or accounts are compromised | Exposure of API keys; financial costs; potential compromise of external services | High |
+| 0007 | Fabric Agent Action | Denial of Service attacks by overwhelming the action with inputs or triggering excessive workflows | Denial of Service | Attackers could submit numerous inputs or trigger workflows to exhaust resources or API quotas | Relies on GitHub's rate limiting and workflow conditions | Implement rate limiting at workflow level; add checks to limit the frequency of action triggers; use conditional executions | Possible if action is publicly accessible; attackers may attempt to exhaust resources | Service disruption; financial costs due to excessive API calls | Medium |
+| 0008 | LLM Provider API | Tampering of responses from LLM provider leading to incorrect or malicious outputs | Tampering | If responses from LLM provider are manipulated, the action could process incorrect data | Trust placed in external LLM providers; no specific mitigations | Validate responses from LLM provider; implement checksums or signatures if supported; use secure channels | Unlikely when using reputable providers; possible if network compromised or provider is malicious | Execution of incorrect actions; potential code injection or data corruption | Medium |
+| 0009 | GitHub Platform | Spoofing of user identities leading to unauthorized action executions | Spoofing | Attackers could spoof user identities and trigger actions with elevated privileges | Relies on GitHub authentication mechanisms (Security Requirement under Authentication) | Use GitHub's secure authentication methods; enforce multi-factor authentication; monitor for suspicious activities | Unlikely due to GitHub's strong authentication; possible if user accounts are compromised | Unauthorized actions being executed; potential data compromise | High |
+| 0010 | Fabric Agent Action | Lack of audit logging complicating detection of unauthorized or malicious activities | Repudiation | Without proper logging, it is difficult to trace actions and investigate incidents | Audit logging is recommended but not currently implemented (Recommended Security Control #3) | Implement detailed audit logging within the action; log key activities; ensure logs are securely stored and access controlled | Possible if logging is not implemented; attackers may exploit lack of logs to hide activities | Difficulty in incident response; undetected malicious activities | Medium |
+| 0011 | User Input Data | Information disclosure through unintended logging of sensitive user input data | Information Disclosure | Sensitive data provided by users could be logged inadvertently, leading to data exposure | Environment variable management to prevent exposure (Security Control #3) | Scrub sensitive data from logs; avoid logging user inputs that may contain confidential information | Possible if logs are not properly managed; developers may enable verbose logging | Exposure of proprietary or confidential information | Medium |
+| 0012 | Fabric Agent Action | Unauthorized modification of the action code leading to execution of malicious code | Tampering | If the action code is modified by unauthorized users, it could execute malicious code within workflows | Code is managed in GitHub with access controls; relies on repository security | Restrict repository write permissions; use code signing or integrity checks; monitor for unauthorized code changes | Possible if repository permissions are mismanaged | Execution of malicious code; compromise of workflows | High |
+| 0013 | GitHub Actions Runner | Excessive resource consumption leading to Denial of Service for other workflows | Denial of Service | The action could consume excessive CPU, memory, or network resources, impacting other workflows | GitHub-hosted runners have resource limits; no specific controls in action | Optimize action code for efficiency; set resource limits if possible; monitor resource usage | Possible during high usage periods or due to inefficient code | Workflow delays; impact on developer productivity | Low |
+| 0014 | External Contributors | Malicious inputs from external contributors leading to security breaches | Tampering | Contributors may submit pull requests or issues with malicious payloads that could trigger the action in unintended ways | Workflows can be configured to not run for external contributions; relies on maintainers' configurations | Restrict action triggers for external contributions; require manual approval before running workflows | Possible if workflows are misconfigured | Security breaches; execution of malicious code | High |
+| 0015 | LLM Provider API | Service unavailability impacting the action's execution | Denial of Service | If the LLM provider is unavailable, the action cannot function properly | No mitigation in action; dependency on external services | Implement retries with backoff; include fallbacks; monitor service status | Possible due to network issues or provider outages | Disruption of automated workflows; delays in development processes | Medium |
+
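+As an illustration of the access-control mitigation for threats 0001 and 0014 (Security Control #1), a workflow can gate execution on the identity of the user who triggered it. The sketch below is a hypothetical configuration: the action inputs, the `/fabric` trigger prefix, and the version tag are assumptions for illustration, not taken from the repository.
+
+```yaml
+name: fabric-agent
+on:
+  issue_comment:
+    types: [created]
+
+jobs:
+  run-fabric:
+    # Mitigation for threats 0001/0014: run only for comments made by the
+    # repository owner, never for arbitrary external contributors.
+    if: >
+      github.event.comment.user.login == github.event.repository.owner.login &&
+      startsWith(github.event.comment.body, '/fabric')
+    runs-on: ubuntu-latest
+    steps:
+      - uses: xvnpw/fabric-agent-action@v1   # version tag is an assumption
+        with:
+          input_file: fabric_input.md        # hypothetical input name
+          output_file: fabric_output.md      # hypothetical output name
+        env:
+          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}  # stored in GitHub Secrets (Security Control #2)
+```
+
+Sourcing the key from GitHub Secrets also helps against threat 0002, since GitHub automatically redacts registered secret values from workflow logs.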
+## QUESTIONS & ASSUMPTIONS
+
+**Questions**:
+
+1. **Authentication Enhancements**: Are additional authentication mechanisms needed beyond GitHub's built-in controls to enhance security?
+2. **API Key Management**: How are API keys rotated and managed to minimize the risk of key compromise?
+3. **Support for Additional LLM Providers**: Are there plans to support additional LLM providers or self-hosted models for greater flexibility?
+4. **Compliance Requirements**: What are the compliance requirements concerning data processing with external LLM services?
+5. **Workflow Configuration Guidance**: How are users instructed to properly configure workflow access controls as outlined in the documentation?
+6. **Logging Practices**: Are there guidelines on what should and should not be logged to prevent unintentional information disclosure?
+7. **Input Validation Mechanisms**: What mechanisms are in place to ensure that user inputs are properly sanitized and validated?
+
+**Assumptions**:
+
+- Users will properly configure workflow access controls as per the provided documentation.
+- API keys are securely stored using GitHub Secrets and are not exposed in logs or outputs.
+- LLM providers comply with relevant data protection regulations (e.g., GDPR, CCPA).
+- The action will primarily run in GitHub-hosted runners unless explicitly configured otherwise.
+- GitHub provides sufficient logging and audit capabilities for monitoring action executions.
+- All communications with LLM providers are encrypted using HTTPS.
+- Users are aware of the need to restrict the action's execution to authorized personnel.