ℹ️ 28 fields analyzed. 21 map directly to DocuSign tabs, 5 map with caveats, and 2 are skipped.
| Adobe Sign Field | Type | DocuSign Tab | Result |
|---|---|---|---|
| `signature_sales_rep` | SIGNATURE | `signHereTabs` | ✓ Direct |
| `signature_customer` | SIGNATURE | `signHereTabs` | ✓ Direct |
| `date_signed` | DATE | `dateSignedTabs` | ✓ Direct |
| `customer_name` | TEXT | `textTabs` | ✓ Direct |
| `commission_rate` | TEXT (CALCULATED) | `textTabs` | ⚠ No calculation |
| `total_commission` | TEXT (CALCULATED) | `textTabs` | ⚠ No calculation |
| `conditional_bonus` | TEXT (COND: HIDE) | `textTabs` | ⚠ HIDE skipped |
| `js_validator_field` | TEXT (JS) | `textTabs` | ⚠ No JS validation |
| `inline_logo` | INLINE_IMAGE | — | ✕ Skipped |
| `participation_stamp` | PARTICIPATION | — | ✕ Skipped |
⚠️ 5 issues found. None are blockers — this template can be migrated, but some features will behave differently in DocuSign. Review each issue below before proceeding.
**WARNING — Conditional HIDE actions (2 fields)**

Fields `conditional_bonus` and `optional_clause` use HIDE conditions. DocuSign only supports SHOW conditions — these fields will always be visible to signers.

Manual fix: in DocuSign, re-implement the logic as a conditional SHOW on the dependent field, keyed to the parent field's value, instead of a HIDE.
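One way to approach that manual fix: when the parent field has a small, known set of values, "hide when X" can be inverted into the SHOW form DocuSign does support. A minimal sketch, with illustrative field names and values:

```python
# Sketch: inverting a HIDE condition into DocuSign's SHOW form.
# Assumes the parent field has a finite value set (e.g. a radio group),
# so "hide when value == X" becomes "show when value == Y" for the one
# remaining value. Field/value names here are hypothetical examples.

def invert_hide_to_show(tab: dict, parent_label: str,
                        hide_value: str, all_values: list[str]) -> dict:
    """Rewrite 'hide when parent == hide_value' as a SHOW condition."""
    show_values = [v for v in all_values if v != hide_value]
    if len(show_values) != 1:
        # More than one remaining value needs manual rework in DocuSign
        raise ValueError("cannot invert automatically; fix in template editor")
    tab["conditionalParentLabel"] = parent_label    # parent tab's tabLabel
    tab["conditionalParentValue"] = show_values[0]  # value that reveals the tab
    return tab

tab = invert_hide_to_show({"tabLabel": "conditional_bonus"},
                          "bonus_eligible", hide_value="No",
                          all_values=["Yes", "No"])
# tab now shows only when bonus_eligible == "Yes"
```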
**WARNING — Calculated fields (2 fields)**

Fields `commission_rate` and `total_commission` use auto-calculation formulas. Formulas are not migrated — these fields become plain text fields, and signers must enter values manually.

Manual fix: re-create the calculation with DocuSign's formula feature (if available on your plan), or convert each field to a pre-filled text field.
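A sketch of the pre-filled-text-field fallback: compute the value outside DocuSign and write it into a locked text tab. The tab keys match DocuSign's standard text-tab properties; the commission numbers are illustrative:

```python
# Fallback for a lost calculation: do the arithmetic the Adobe formula
# used to do, then emit a pre-filled, locked DocuSign text tab.
# Label and amounts below are hypothetical examples.

def prefilled_total(label: str, base: float, rate: float) -> dict:
    total = round(base * rate, 2)  # the computation DocuSign won't do for us
    return {
        "tabLabel": label,
        "value": f"{total:.2f}",  # pre-filled result shown to the signer
        "locked": "true",         # signer cannot edit the computed value
    }

tab = prefilled_total("total_commission", base=10_000.0, rate=0.05)
# tab["value"] == "500.00"
```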
**WARNING — JavaScript field validator (1 field)**

Field `js_validator_field` has a custom JavaScript validation rule. DocuSign does not support JavaScript validators, so the field is migrated without validation.

Manual fix: use DocuSign's built-in regex validation or number/date format constraints as a substitute.
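A sketch of the regex substitute using DocuSign's `validationPattern` / `validationMessage` text-tab properties. The ZIP-code pattern is only an example — a regex cannot replicate arbitrary JavaScript logic:

```python
# Replacing a JS validator with DocuSign's built-in regex validation.
# validationPattern/validationMessage are standard text-tab properties;
# the pattern below (US ZIP code) is a stand-in for whatever rule the
# original JavaScript enforced.
import re

def regex_validated_tab(label: str, pattern: str, message: str) -> dict:
    re.compile(pattern)  # fail fast if the pattern itself is malformed
    return {
        "tabLabel": label,
        "validationPattern": pattern,
        "validationMessage": message,  # shown to the signer on mismatch
    }

tab = regex_validated_tab("js_validator_field",
                          r"^\d{5}(-\d{4})?$",
                          "Enter a 5- or 9-digit ZIP code")
```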
**INFO — INLINE_IMAGE field skipped**

Field `inline_logo` is a static image embedded in the form. DocuSign has no equivalent tab type, so the field is skipped. The image is part of the PDF itself and will still appear correctly.
**INFO — PARTICIPATION_STAMP field skipped**

Field `participation_stamp` has no DocuSign equivalent and is skipped. The signer will not see a stamp placement option.
✅ Verification complete. The DocuSign template is functional; 1 manual follow-up is needed.
## Verification Results

Run Apr 10, 2026 at 3:42 PM

✓ Template exists in DocuSign — found template ID `a4b8c2d1-9f3e-4a2b-8c1d-7e6f5a4b3c2d`.

Action required: "Master Services Agreement" was blocked due to a missing PDF document — the template has no PDF linked in Adobe Sign, and at least one document is required to create a DocuSign template. Fix in Adobe Sign, then re-run the migration. "Sales Commission Agreement" migrated with 5 warnings — review the Issues tab before sending envelopes.
⚠️ Blockers prevent migration and must be resolved in Adobe Sign first. Warnings allow migration to proceed but may affect the signer experience — review each one.
🚫 Blockers (3)

- **BLOCK — Missing PDF document**: Master Services Agreement, Contractor Agreement v2, IP Assignment Form — no document attached. Fix in Adobe Sign: attach a PDF to each template, then re-analyze.
⚠ Feature Warnings (26)

- **WARNING — Conditional HIDE actions**: 14 templates · 31 fields. HIDE conditions are not supported; fields will always be visible.
- **WARNING — Calculated fields**: 7 templates · 18 fields. Auto-calculation formulas will be lost; fields become plain text.
- **WARNING — JavaScript validators**: 5 templates · 9 fields. Custom validation rules will be removed.
- **INFO — Skipped field types**: 12 templates. INLINE_IMAGE and PARTICIPATION_STAMP fields have no DocuSign equivalent and are safely skipped.
## Post-Migration Verification

Automatically verify migrated templates by sending test envelopes.
ℹ️ Verification sends a test envelope using each DocuSign template to a sandbox address. It checks that the template is valid, all tabs render correctly, and signers can complete the signing flow.
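The verification round-trip against the DocuSign eSignature REST API can be sketched as two payloads: one to create and send a test envelope from the migrated template, one to void it afterwards so the sandbox stays clean. The role name "Signer 1" and the demo endpoint are assumptions — match them to your account and template:

```python
# Payload sketches for envelope-based verification. The demo base URL and
# the "Signer 1" role name are placeholders, not values from this project.

CREATE_URL = "https://demo.docusign.net/restapi/v2.1/accounts/{account_id}/envelopes"

def make_test_envelope_payload(template_id: str, name: str, email: str) -> dict:
    """POST body that creates and immediately sends a test envelope."""
    return {
        "templateId": template_id,
        "templateRoles": [
            {"roleName": "Signer 1", "name": name, "email": email},  # assumed role
        ],
        "status": "sent",  # "sent" dispatches the envelope right away
    }

def make_void_payload(reason: str = "migration verification complete") -> dict:
    """PUT body (to .../envelopes/{envelopeId}) that voids a sent envelope."""
    return {"status": "voided", "voidedReason": reason}

payload = make_test_envelope_payload(
    "a4b8c2d1-9f3e-4a2b-8c1d-7e6f5a4b3c2d",
    "Test Signer", "sandbox@example.com",
)
```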
| Status | Count | Note |
|---|---|---|
| Verified | 97 | fully tested |
| Partial Pass | 12 | manual review needed |
| Failed | 2 | signing flow broken |
| Not Run | 76 | pending verification |
## Verification Queue

| Template | Migration Status | Verification | Test Envelope |
|---|---|---|---|
| Purchase Order v3 | Migrated | ✓ Verified | `env_7f2a1b3c` |
| Sales Commission Agreement | Migrated | ⚠ Partial | `env_8a3b2c4d` |
| David Tag Demo Form | Migrated | Pending | — |
| Employee Onboarding Form | Migrated | ✗ Failed | `env_9c4d3e5f` |
## History & Audit Log

All migration runs for Acme Corporation.

| Timestamp | Run Type | Templates | Result | Operator |
|---|---|---|---|---|
| 2026-04-21 16:12 | Batch (8) | Sales Commission Agreement, David Tag Demo, +6 | 5 migrated, 1 blocked, 2 warnings | Paul H. |
| 2026-04-21 14:05 | Batch (47) | HR templates batch | 45 migrated, 2 warnings | Paul H. |
| 2026-04-21 10:30 | Single | Purchase Order v3 | Success | Paul H. |
| 2026-04-20 09:15 | Dry Run (100) | All templates | Dry run — 8 blockers found | Paul H. |
## Settings & Connections

**Migration Project** — each project is one customer engagement with its own credentials and history.

- Customer Name
- Project Label
- Started

**Adobe Sign Connection** — ● Connected

- Account: acme@corp.com (EU2 shard)
- Token expires: 2026-04-21 18:30 UTC

**DocuSign Connection** — ● Connected

- Account: Acme Corp Sandbox (demo)
- Auth method: JWT Grant

**Verification Settings** — used when sending test envelopes to validate migrated templates.

- Test Recipient Name
- Test Recipient Email
- Auto-void After — test envelopes are voided automatically to keep the DocuSign account clean.
- Verification Mode — API-only is faster; a full envelope confirms the signing experience end-to-end.
## Migration Projects

Each project is one customer engagement. Switching projects loads that customer's connections and history.

- **AC — Acme Corporation** · Q2 2026 Migration · 312 templates · 60% complete · Active
- **GI — Globex Industries** · Q1 2026 Migration · 87 templates · ✓ Completed · Done
- **IN — Initech LLC** · Q2 2026 Migration · 204 templates · Not started · Setup
diff --git a/field-mapping.md b/field-mapping.md
index 8c90ca9..e97b946 100644
--- a/field-mapping.md
+++ b/field-mapping.md
@@ -84,24 +84,39 @@ Tab types that do not merge (only first location used or handled specially):
Adobe Sign `conditionalAction` → DocuSign `conditionalParentLabel` + `conditionalParentValue` on the dependent tab.
-| Adobe Sign | DocuSign | Notes |
-|-----------------------------------|---------------------------------|-------|
-| `predicates[].fieldName` | `conditionalParentLabel` | For radio groups, matches the group name |
-| `predicates[].value` | `conditionalParentValue` | The value the trigger must equal to reveal the tab |
-| `action: SHOW` | Supported | Tab is hidden until condition is met |
-| `action: HIDE` | **Not supported** | No DocuSign equivalent — condition skipped, field always shown |
-| `operator: EQUALS` | Supported | Only operator DocuSign supports |
-| Other operators | **Not supported** | Condition skipped, warning logged |
-| Multiple predicates (ANY/ALL) | **Partial** — first EQUALS only | Warning logged; remaining predicates ignored |
+| Adobe Sign | DocuSign | Outcome | Notes |
+|-----------------------------------|---------------------------------|---------|-------|
+| `predicates[].fieldName` | `conditionalParentLabel` | Mapped | For radio groups, matches the group name |
+| `predicates[].value` | `conditionalParentValue` | Mapped | The value the trigger must equal to reveal the tab |
+| `action: SHOW` | Supported | Mapped | Tab is hidden until condition is met |
+| `action: HIDE` | **Not supported** | Dropped | No DocuSign equivalent — field always shown. `HIDE_ACTION` issue emitted. |
+| `operator: EQUALS` | Supported | Mapped | Only operator DocuSign supports |
+| Other operators (NOT_EQUALS, etc.)| **Not supported** | Dropped | Condition skipped. `UNSUPPORTED_OPERATOR` issue emitted. |
+| Multiple predicates (ANY/ALL) | **Partial** — first EQUALS only | Partial | `MULTI_PREDICATE` issue emitted; remaining predicates ignored |
+| Trigger field on a different recipient | **Not supported** | Dropped | DocuSign `conditionalParentLabel` only works within the same recipient's tab set. `CROSS_RECIPIENT_CONDITIONAL` issue emitted. |
+| Parent is signature/auto-fill tab | **Not supported** | Stripped | DocuSign forbids signature, initial, dateSigned, fullName, email, title tabs as conditional parents. `INVALID_PARENT_TAB` issue emitted. |
## Known Gaps
- **Conditional HIDE**: Adobe Sign can conditionally hide a field. DocuSign only supports
revealing hidden fields — there is no native way to hide a visible field conditionally.
Templates with HIDE conditions will have those fields always visible after migration.
+ Emits a `HIDE_ACTION` field issue.
+- **Cross-recipient conditionals**: Adobe Sign allows field B to appear/hide based on
+ the value of field A even when A and B belong to different recipients. DocuSign's
+ `conditionalParentLabel` only works within a single recipient's tab set.
+ Emits a `CROSS_RECIPIENT_CONDITIONAL` field issue; the condition is dropped.
+- **Invalid or forbidden conditional parents**: If the trigger field maps to a signature,
+  initial, dateSigned, fullName, email, or title tab — DocuSign forbids these as conditional
+ parents and returns `CONDITIONALTAB_HAS_INVALID_PARENT` (400). The compose pipeline
+ strips these conditions in a post-processing pass and emits an `INVALID_PARENT_TAB`
+ field issue.
- **Multi-predicate conditions**: Adobe Sign supports ANY/ALL of multiple predicates.
DocuSign only supports a single parent condition per tab. Only the first EQUALS
predicate is mapped; complex conditions require manual rework.
+ Emits a `MULTI_PREDICATE` field issue.
+- **Unsupported operators**: NOT_EQUALS, GT, LT etc. have no DocuSign equivalent.
+ The condition is dropped. Emits an `UNSUPPORTED_OPERATOR` field issue.
- **DocuSign formula fields**: No Adobe Sign equivalent — flag for manual rewrite.
- **Advanced field validation**: Adobe regex/custom script validation is not mapped;
best-effort via standard DocuSign validation types only.
@@ -109,8 +124,14 @@ Adobe Sign `conditionalAction` → DocuSign `conditionalParentLabel` + `conditio
DocuSign `radioGroupTabs` entry with per-location radio button coordinates.
- **Stamp tab account feature**: `stampTabs` requires the stamp/hanko feature to be
enabled on the DocuSign account. Verify before migrating templates that contain
- Adobe Sign STAMP fields.
+ Adobe Sign STAMP fields. Emits a `PARTIAL_FIELD_TYPE` field issue.
+- **FILE_CHOOSER → signerAttachmentTabs**: DocuSign attachment tabs behave differently
+ from Adobe file upload fields (different UX, no file type restrictions).
+ Emits a `PARTIAL_FIELD_TYPE` field issue recommending manual review.
-## To Do
-- Add conditional logic/rule mapping table
-- Document field mask and default value transforms
+## Field Issue Codes
+
+All dropped or approximated features are surfaced as structured `FieldIssue` objects
+alongside human-readable warning strings. See `src/models/field_issue.py` for the full
+list. The UI groups these by code in collapsed sections within migration result rows,
+history rows, and the template detail Issues tab.
diff --git a/src/compose_docusign_template.py b/src/compose_docusign_template.py
index 7997806..a73df89 100644
--- a/src/compose_docusign_template.py
+++ b/src/compose_docusign_template.py
@@ -35,9 +35,19 @@ Conditional logic:
import base64
import json
-import os
from pathlib import Path
+from src.models.field_issue import (
+ FieldIssue,
+ CROSS_RECIPIENT_CONDITIONAL,
+ UNSUPPORTED_OPERATOR,
+ HIDE_ACTION,
+ MULTI_PREDICATE,
+ INVALID_PARENT_TAB,
+ FIELD_TYPE_SKIPPED,
+ PARTIAL_FIELD_TYPE,
+)
+
DOCUMENT_ID = "1"
@@ -154,7 +164,14 @@ def _sized_tabs(locations: list, label: str, extra: dict | None = None) -> list:
# Conditional logic
# ---------------------------------------------------------------------------
-def _apply_conditional_to_tabs(tabs: dict, field: dict, warnings: list) -> dict:
+def _apply_conditional_to_tabs(
+ tabs: dict,
+ field: dict,
+ warnings: list,
+ issues: list,
+ current_assignee: str = "",
+ field_assignee: dict | None = None,
+) -> dict:
"""
Apply DocuSign conditionalParentLabel / conditionalParentValue to tabs based
on an Adobe Sign conditionalAction.
@@ -169,6 +186,8 @@ def _apply_conditional_to_tabs(tabs: dict, field: dict, warnings: list) -> dict:
Mapping limitations:
- Only SHOW action is supported. DocuSign has no native HIDE — condition skipped.
- Only EQUALS operator is supported. Others are skipped.
+ - Cross-recipient conditions not supported — DocuSign conditionals only work within
+ a single recipient's tab set.
- Only one predicate is mapped. Multi-predicate ANY/ALL logic is not supported;
the first EQUALS predicate is used and a warning is logged.
"""
@@ -184,33 +203,54 @@ def _apply_conditional_to_tabs(tabs: dict, field: dict, warnings: list) -> dict:
action = ca.get("action", "SHOW")
if action != "SHOW":
- warnings.append(
- f"Conditional '{label}': action={action} is not supported in DocuSign "
- f"(only SHOW is supported) — condition skipped"
+ msg = (
+ f"Field '{label}' has a HIDE condition which DocuSign does not support — "
+ f"condition dropped. The field will always be visible."
)
+ warnings.append(msg)
+ issues.append(FieldIssue(HIDE_ACTION, label, msg).to_dict())
return tabs
predicate = next((p for p in predicates if p.get("operator") == "EQUALS"), None)
if not predicate:
- warnings.append(
- f"Conditional '{label}': no EQUALS predicate found "
- f"(operators: {[p.get('operator') for p in predicates]}) — condition skipped"
+ ops = [p.get("operator") for p in predicates]
+ msg = (
+ f"Field '{label}' uses unsupported condition operator(s) {ops} — "
+ f"only EQUALS is supported in DocuSign. Condition dropped; field will always be visible."
)
+ warnings.append(msg)
+ issues.append(FieldIssue(UNSUPPORTED_OPERATOR, label, msg).to_dict())
return tabs
- if len(predicates) > 1:
- warnings.append(
- f"Conditional '{label}': {len(predicates)} predicates with "
- f"anyOrAll={ca.get('anyOrAll')} — only first EQUALS predicate mapped, "
- f"remaining conditions ignored"
- )
+ parent_field_name = predicate["fieldName"]
+
+ # Cross-recipient check: DocuSign does not support conditionals across recipients
+ if field_assignee is not None and current_assignee:
+ parent_assignee = field_assignee.get(parent_field_name, "")
+ if parent_assignee and parent_assignee != current_assignee:
+ msg = (
+ f"Field '{label}' has a show/hide condition controlled by '{parent_field_name}', "
+ f"which belongs to a different recipient ({parent_assignee} vs {current_assignee}). "
+ f"DocuSign does not support cross-recipient conditional logic — condition dropped."
+ )
+ warnings.append(msg)
+ issues.append(FieldIssue(CROSS_RECIPIENT_CONDITIONAL, label, msg).to_dict())
+ return tabs
+
+ if len(predicates) > 1:
+ msg = (
+ f"Field '{label}' has {len(predicates)} conditions combined with "
+ f"anyOrAll={ca.get('anyOrAll')} — only the first EQUALS predicate was mapped. "
+ f"Remaining conditions were dropped."
+ )
+ warnings.append(msg)
+ issues.append(FieldIssue(MULTI_PREDICATE, label, msg).to_dict())
- parent_label = predicate["fieldName"]
parent_value = predicate["value"]
for tab_list in tabs.values():
for tab in tab_list:
- tab["conditionalParentLabel"] = parent_label
+ tab["conditionalParentLabel"] = parent_field_name
tab["conditionalParentValue"] = parent_value
return tabs
@@ -220,11 +260,12 @@ def _apply_conditional_to_tabs(tabs: dict, field: dict, warnings: list) -> dict:
# Tab builder
# ---------------------------------------------------------------------------
-def build_tabs_for_field(field: dict, warnings: list) -> dict:
+def build_tabs_for_field(field: dict, warnings: list, issues: list) -> dict:
"""
Convert one Adobe Sign field into the correct DocuSign tabs structure.
Returns a dict of tab-group keys, e.g. {"textTabs": [...]}.
- Unmappable fields are skipped and a warning is appended.
+ Unmappable fields are skipped; a warning string and a structured FieldIssue
+ are both appended so callers have both human-readable and machine-readable output.
"""
input_type = field.get("inputType", "")
label = field.get("name", "unnamed")
@@ -240,22 +281,16 @@ def build_tabs_for_field(field: dict, warnings: list) -> dict:
if input_type == "TEXT_FIELD":
if content_type == "SIGNATURE_DATE":
- # Auto-populated with the signing date
return {"dateSignedTabs": _sized_tabs(locations, label)}
elif content_type == "SIGNER_NAME":
- # Auto-populated with the signer's full name
return {"fullNameTabs": _sized_tabs(locations, label)}
elif content_type == "SIGNER_EMAIL":
- # Auto-populated with the signer's email address
return {"emailAddressTabs": _sized_tabs(locations, label)}
elif content_type in ("COMPANY", "SIGNER_COMPANY"):
- # Auto-populated with the signer's company
return {"companyTabs": _sized_tabs(locations, label)}
elif content_type in ("TITLE", "SIGNER_TITLE"):
- # Auto-populated with the signer's title
return {"titleTabs": _sized_tabs(locations, label)}
elif content_type == "DATA" and validation == "DATE":
- # User-entered date field (not auto-signed date)
return {"dateTabs": _sized_tabs(locations, label, {"required": required_str, "locked": locked_str})}
elif content_type == "DATA" and validation == "NUMBER":
return {"numberTabs": _sized_tabs(locations, label, {"required": required_str, "locked": locked_str})}
@@ -263,15 +298,12 @@ def build_tabs_for_field(field: dict, warnings: list) -> dict:
return {"textTabs": _sized_tabs(locations, label, {"required": required_str, "locked": locked_str})}
elif input_type == "SIGNATURE":
- # Each signature/initials location is an independent signing action —
- # emit one tab per location but do not size them (DocuSign controls size)
if content_type == "SIGNER_INITIALS":
return {"initialHereTabs": [_make_base_tab(loc, label) for loc in locations]}
else:
return {"signHereTabs": [_make_base_tab(loc, label) for loc in locations]}
elif input_type == "BLOCK" and content_type == "SIGNATURE_BLOCK":
- # Composite signature block — map to signHere at block's location
return {"signHereTabs": [_make_base_tab(loc, label) for loc in locations]}
elif input_type == "DATE":
@@ -286,7 +318,6 @@ def build_tabs_for_field(field: dict, warnings: list) -> dict:
return {"listTabs": _sized_tabs(locations, label, {"required": required_str, "listItems": list_items})}
elif input_type == "RADIO":
- # Each location is one radio button within the group — not tab merging
options = field.get("hiddenOptions") or []
radios = []
for i, loc in enumerate(locations):
@@ -296,26 +327,40 @@ def build_tabs_for_field(field: dict, warnings: list) -> dict:
return {"radioGroupTabs": [{"groupName": label, "documentId": DOCUMENT_ID, "radios": radios}]}
elif input_type == "FILE_CHOOSER":
- warnings.append(f"FILE_CHOOSER '{label}' → mapped to signerAttachmentTabs (manual review recommended)")
+ msg = (
+ f"Field '{label}' is a FILE_CHOOSER — mapped to a signerAttachmentTabs tab. "
+ f"DocuSign attachment tabs behave differently from Adobe file upload fields; manual review recommended."
+ )
+ warnings.append(msg)
+ issues.append(FieldIssue(PARTIAL_FIELD_TYPE, label, msg).to_dict())
tab = _make_base_tab(locations[0], label, {"optional": "true" if not field.get("required") else "false"})
return {"signerAttachmentTabs": [tab]}
elif input_type == "INLINE_IMAGE":
- warnings.append(f"INLINE_IMAGE '{label}' → skipped (no DocuSign equivalent)")
+ msg = f"Field '{label}' is an INLINE_IMAGE — skipped. There is no equivalent tab type in DocuSign."
+ warnings.append(msg)
+ issues.append(FieldIssue(FIELD_TYPE_SKIPPED, label, msg).to_dict())
return {}
elif input_type == "STAMP":
- # DocuSign stampTabs — signer uploads or selects a hanko/seal stamp image.
- # Requires the stamp feature to be enabled on the DocuSign account.
- warnings.append(f"STAMP '{label}' → stampTabs (verify stamp feature is enabled on your DocuSign account)")
+ msg = (
+ f"Field '{label}' is a STAMP — mapped to stampTabs. "
+ f"This requires the stamp feature to be enabled on your DocuSign account."
+ )
+ warnings.append(msg)
+ issues.append(FieldIssue(PARTIAL_FIELD_TYPE, label, msg).to_dict())
return {"stampTabs": [_make_base_tab(loc, label) for loc in locations]}
elif input_type == "PARTICIPATION_STAMP":
- warnings.append(f"PARTICIPATION_STAMP '{label}' → skipped (no DocuSign equivalent)")
+ msg = f"Field '{label}' is a PARTICIPATION_STAMP — skipped. There is no equivalent tab type in DocuSign."
+ warnings.append(msg)
+ issues.append(FieldIssue(FIELD_TYPE_SKIPPED, label, msg).to_dict())
return {}
else:
- warnings.append(f"Unknown field type '{input_type}' (contentType='{content_type}') for field '{label}' → skipped")
+ msg = f"Field '{label}' has unknown type '{input_type}' (contentType='{content_type}') — skipped."
+ warnings.append(msg)
+ issues.append(FieldIssue(FIELD_TYPE_SKIPPED, label, msg).to_dict())
return {}
@@ -325,11 +370,55 @@ def merge_tabs(acc: dict, new: dict) -> dict:
return acc
+# Tab types DocuSign forbids as conditional parents (auto-filled or action tabs)
+_INVALID_PARENT_TAB_TYPES = {
+ "signHereTabs", "initialHereTabs", "dateSignedTabs",
+    "fullNameTabs", "emailAddressTabs", "titleTabs", "signerAttachmentTabs",
+}
+
+
+def _strip_invalid_conditionals(signers: list, warnings: list, issues: list) -> None:
+ """
+ Remove conditionalParentLabel/Value from any tab whose parent label either
+ doesn't exist in the template or points to a tab type DocuSign forbids as a
+ parent (signature, initial, auto-filled). Mutates signers in place.
+ """
+ for signer in signers:
+ tabs = signer.get("tabs", {})
+
+ # Collect valid parent labels: only tab types allowed as parents
+ valid_labels: set[str] = set()
+ for tab_type, tab_list in tabs.items():
+ if tab_type in _INVALID_PARENT_TAB_TYPES:
+ continue
+ for tab in tab_list:
+ lbl = tab.get("tabLabel") or tab.get("groupName")
+ if lbl:
+ valid_labels.add(lbl)
+
+ # Strip references to invalid/missing parents
+ for tab_list in tabs.values():
+ for tab in tab_list:
+ parent = tab.get("conditionalParentLabel")
+ if parent and parent not in valid_labels:
+ field_name = tab.get("tabLabel") or tab.get("groupName") or "?"
+ msg = (
+ f"Field '{field_name}' has a conditional that references parent "
+ f"'{parent}', which either does not exist as a tab or is a "
+ f"signature/auto-fill tab (forbidden as a DocuSign conditional parent). "
+ f"Condition stripped — field will always be visible."
+ )
+ warnings.append(msg)
+ issues.append(FieldIssue(INVALID_PARENT_TAB, field_name, msg).to_dict())
+ tab.pop("conditionalParentLabel", None)
+ tab.pop("conditionalParentValue", None)
+
+
# ---------------------------------------------------------------------------
# Main compose function
# ---------------------------------------------------------------------------
-def compose_template(template_dir: str, output_path: str) -> tuple[dict, list[str]]:
+def compose_template(template_dir: str, output_path: str) -> tuple[dict, list[str], list[dict]]:
"""
Build a DocuSign template JSON from a downloaded Adobe Sign template folder.
@@ -339,10 +428,13 @@ def compose_template(template_dir: str, output_path: str) -> tuple[dict, list[st
output_path: where to write the resulting DocuSign template JSON
Returns:
- (template_dict, warnings_list)
+ (template_dict, warnings_list, field_issues_list)
+ field_issues_list contains structured FieldIssue dicts describing properties
+ that were dropped or approximated during migration (see src/models/field_issue.py).
"""
template_dir = Path(template_dir)
warnings: list[str] = []
+ issues: list[dict] = []
# Load source files
metadata = json.loads((template_dir / "metadata.json").read_text())
@@ -376,16 +468,28 @@ def compose_template(template_dir: str, output_path: str) -> tuple[dict, list[st
"tabs": {},
})
+ # Build field→assignee lookup for cross-recipient conditional detection
+ field_assignee: dict[str, str] = {}
+ for f in fields:
+ name = f.get("name", "")
+ assignee = f.get("assignee") or f"recipient{max(f.get('signerIndex', 0), 0)}"
+ if name:
+ field_assignee[name] = assignee
+
# Assign tabs to the correct signer
for field in fields:
assignee = field.get("assignee") or f"recipient{max(field.get('signerIndex', 0), 0)}"
idx = assignee_to_index(assignee, recipients)
if idx >= len(signers):
idx = 0
- tabs = build_tabs_for_field(field, warnings)
- tabs = _apply_conditional_to_tabs(tabs, field, warnings)
+ tabs = build_tabs_for_field(field, warnings, issues)
+ tabs = _apply_conditional_to_tabs(tabs, field, warnings, issues, assignee, field_assignee)
signers[idx]["tabs"] = merge_tabs(signers[idx]["tabs"], tabs)
+ # Post-process: strip conditionalParentLabel references that point to
+ # non-existent or invalid parents (signature/initial tabs can't be parents).
+ _strip_invalid_conditionals(signers, warnings, issues)
+
template = {
"name": metadata.get("name", template_dir.name),
"description": f"Migrated from Adobe Sign — original owner: {metadata.get('ownerEmail', '')}",
@@ -406,7 +510,7 @@ def compose_template(template_dir: str, output_path: str) -> tuple[dict, list[st
with open(output_path, "w") as f:
json.dump(template, f, indent=2)
- return template, warnings
+ return template, warnings, issues
# ---------------------------------------------------------------------------
@@ -427,7 +531,7 @@ if __name__ == "__main__":
output_path = Path(__file__).parent.parent / "migration-output" / template_dir.name / "docusign-template.json"
print(f"\n--- {template_dir.name} ---")
try:
- _, warnings = compose_template(str(template_dir), str(output_path))
+ _, warnings, issues = compose_template(str(template_dir), str(output_path))
print(f" Written: {output_path}")
for w in warnings:
print(f" WARNING: {w}")
diff --git a/src/models/__init__.py b/src/models/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/src/models/field_issue.py b/src/models/field_issue.py
new file mode 100644
index 0000000..fef0dcf
--- /dev/null
+++ b/src/models/field_issue.py
@@ -0,0 +1,39 @@
+"""
+Structured field-level issue emitted during compose/migration.
+Distinct from validation blockers — a field issue means the field
+migrated but something was silently dropped or approximated.
+"""
+
+from dataclasses import dataclass, asdict
+
+
+# Machine-readable codes used in field_issues lists
+CROSS_RECIPIENT_CONDITIONAL = "CROSS_RECIPIENT_CONDITIONAL"
+UNSUPPORTED_OPERATOR = "UNSUPPORTED_OPERATOR"
+HIDE_ACTION = "HIDE_ACTION"
+MULTI_PREDICATE = "MULTI_PREDICATE"
+INVALID_PARENT_TAB = "INVALID_PARENT_TAB"
+FIELD_TYPE_SKIPPED = "FIELD_TYPE_SKIPPED"
+PARTIAL_FIELD_TYPE = "PARTIAL_FIELD_TYPE"
+
+# Human-readable labels for each code (used by the UI)
+CODE_LABELS = {
+ CROSS_RECIPIENT_CONDITIONAL: "Cross-recipient conditional dropped",
+ UNSUPPORTED_OPERATOR: "Unsupported condition operator dropped",
+ HIDE_ACTION: "Hide condition dropped (no DocuSign equivalent)",
+ MULTI_PREDICATE: "Multi-condition logic simplified to first match",
+ INVALID_PARENT_TAB: "Conditional parent tab invalid or missing",
+ FIELD_TYPE_SKIPPED: "Field type skipped (no DocuSign equivalent)",
+ PARTIAL_FIELD_TYPE: "Field type approximated",
+}
+
+
+@dataclass
+class FieldIssue:
+ code: str # one of the constants above
+ field_name: str # Adobe field name
+ message: str # human-readable description of what was dropped and why
+ severity: str = "warning" # "warning" | "info"
+
+ def to_dict(self) -> dict:
+ return asdict(self)
diff --git a/src/models/normalized_template.py b/src/models/normalized_template.py
new file mode 100644
index 0000000..2462fc5
--- /dev/null
+++ b/src/models/normalized_template.py
@@ -0,0 +1,78 @@
+"""
+normalized_template.py
+-----------------------
+Platform-agnostic intermediate schema that decouples Adobe Sign extraction
+from DocuSign composition. Both platforms' data is converted to/from this
+model so neither side is tightly coupled.
+"""
+
+from __future__ import annotations
+
+from enum import Enum
+from typing import Any, Optional
+
+from pydantic import BaseModel, Field
+
+
+class ActionType(str, Enum):
+ SIGN = "SIGN"
+ APPROVE = "APPROVE"
+ CC = "CC"
+ ACKNOWLEDGE = "ACKNOWLEDGE"
+
+
+class NormalizedRole(BaseModel):
+ name: str
+ order: int
+ action_type: ActionType = ActionType.SIGN
+
+
+class NormalizedField(BaseModel):
+ """One form field in the normalized intermediate representation."""
+ type: str # e.g. "signature", "text", "checkbox"
+ label: str
+ page: int
+ x: float
+ y: float
+ width: float
+ height: float
+ required: bool = False
+ read_only: bool = False
+ role_name: str = "" # which role this field belongs to
+ options: list[str] = Field(default_factory=list) # for dropdown/radio
+ validation: str = "" # e.g. "DATE", "NUMBER"
+ content_type: str = "" # e.g. "SIGNATURE_DATE", "SIGNER_NAME"
+ conditional_parent_label: Optional[str] = None
+ conditional_parent_value: Optional[str] = None
+ raw: dict[str, Any] = Field(default_factory=dict) # original source data
+
+
+class NormalizedDocument(BaseModel):
+ name: str
+ content_base64: str = "" # base64-encoded PDF bytes
+ checksum_sha256: str = "" # SHA-256 hex of raw bytes before encoding
+ source_path: str = ""
+
+
+class NormalizedTemplate(BaseModel):
+ """
+ Platform-agnostic representation of an eSignature template.
+ Used as the bridge between Adobe Sign and DocuSign.
+ """
+ name: str
+ description: str = ""
+ email_subject: str = ""
+ email_message: str = ""
+ roles: list[NormalizedRole] = Field(default_factory=list)
+ documents: list[NormalizedDocument] = Field(default_factory=list)
+ fields: list[NormalizedField] = Field(default_factory=list)
+ reminder_enabled: bool = False
+ expiration_days: Optional[int] = None
+ source_id: str = "" # original Adobe Sign template ID
+ unsupported_features: list[str] = Field(default_factory=list)
+
+ def role_names(self) -> list[str]:
+ return [r.name for r in self.roles]
+
+ def fields_for_role(self, role_name: str) -> list[NormalizedField]:
+ return [f for f in self.fields if f.role_name == role_name]
diff --git a/src/reports/__init__.py b/src/reports/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/src/reports/report_builder.py b/src/reports/report_builder.py
new file mode 100644
index 0000000..bfd7995
--- /dev/null
+++ b/src/reports/report_builder.py
@@ -0,0 +1,134 @@
+"""
+report_builder.py
+-----------------
+Builds structured migration reports per template and for batch runs.
+"""
+
+from __future__ import annotations
+
+import json
+from dataclasses import dataclass, field
+from datetime import datetime, timezone
+from enum import Enum
+
+
+class MigrationStatus(str, Enum):
+ SUCCESS = "success"
+ SUCCESS_WITH_WARNINGS = "success_with_warnings"
+ SKIPPED = "skipped"
+ BLOCKED = "blocked"
+ ERROR = "error"
+
+
+@dataclass
+class TemplateReport:
+ template_name: str
+ source_id: str
+ status: MigrationStatus
+ docusign_template_id: str = ""
+ blockers: list[str] = field(default_factory=list)
+ warnings: list[str] = field(default_factory=list)
+ error: str = ""
+ timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())
+ dry_run: bool = False
+
+ def to_dict(self) -> dict:
+ return {
+ "template_name": self.template_name,
+ "source_id": self.source_id,
+ "status": self.status.value,
+ "docusign_template_id": self.docusign_template_id,
+ "blockers": self.blockers,
+ "warnings": self.warnings,
+ "error": self.error,
+ "timestamp": self.timestamp,
+ "dry_run": self.dry_run,
+ }
+
+
+@dataclass
+class MigrationReport:
+ reports: list[TemplateReport] = field(default_factory=list)
+
+ def add(self, report: TemplateReport) -> None:
+ self.reports.append(report)
+
+ def summary(self) -> dict:
+ counts: dict[str, int] = {}
+ for r in self.reports:
+ counts[r.status.value] = counts.get(r.status.value, 0) + 1
+ return {
+ "total": len(self.reports),
+ **counts,
+ }
+
+ def to_dict(self) -> dict:
+ return {
+ "summary": self.summary(),
+ "templates": [r.to_dict() for r in self.reports],
+ }
+
+ def to_json(self, indent: int = 2) -> str:
+ return json.dumps(self.to_dict(), indent=indent)
+
+ def has_errors(self) -> bool:
+ return any(r.status in (MigrationStatus.BLOCKED, MigrationStatus.ERROR) for r in self.reports)
+
+
+def build_success_report(
+ template_name: str,
+ source_id: str,
+ docusign_template_id: str,
+ warnings: list[str],
+ dry_run: bool = False,
+) -> TemplateReport:
+ status = MigrationStatus.SUCCESS_WITH_WARNINGS if warnings else MigrationStatus.SUCCESS
+ return TemplateReport(
+ template_name=template_name,
+ source_id=source_id,
+ status=status,
+ docusign_template_id=docusign_template_id,
+ warnings=warnings,
+ dry_run=dry_run,
+ )
+
+
+def build_blocked_report(
+ template_name: str,
+ source_id: str,
+ blockers: list[str],
+ warnings: list[str],
+ dry_run: bool = False,
+) -> TemplateReport:
+ return TemplateReport(
+ template_name=template_name,
+ source_id=source_id,
+ status=MigrationStatus.BLOCKED,
+ blockers=blockers,
+ warnings=warnings,
+ dry_run=dry_run,
+ )
+
+
+def build_error_report(
+ template_name: str,
+ source_id: str,
+ error: str,
+ dry_run: bool = False,
+) -> TemplateReport:
+ return TemplateReport(
+ template_name=template_name,
+ source_id=source_id,
+ status=MigrationStatus.ERROR,
+ error=error,
+ dry_run=dry_run,
+ )
+
+
+def build_skipped_report(template_name: str, source_id: str, reason: str) -> TemplateReport:
+ return TemplateReport(
+ template_name=template_name,
+ source_id=source_id,
+ status=MigrationStatus.SKIPPED,
+ warnings=[f"Skipped: {reason}"],
+ )
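The status aggregation in `summary()` can be sketched standalone (the status list here is hypothetical; `collections.Counter` stands in for the manual counting loop):

```python
# Sketch of MigrationReport.summary(): collapse per-template statuses
# into per-status counts plus a total. Statuses below are made up.
from collections import Counter

statuses = ["success", "success_with_warnings", "success", "blocked"]
counts = Counter(statuses)
summary = {"total": len(statuses), **counts}
print(summary)
# {'total': 4, 'success': 2, 'success_with_warnings': 1, 'blocked': 1}
```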
diff --git a/src/services/__init__.py b/src/services/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/src/services/mapping_service.py b/src/services/mapping_service.py
new file mode 100644
index 0000000..1b0d73b
--- /dev/null
+++ b/src/services/mapping_service.py
@@ -0,0 +1,275 @@
+"""
+mapping_service.py
+------------------
+Converts a downloaded Adobe Sign template folder into a NormalizedTemplate.
+Extracted from compose_docusign_template.py so the normalization step is
+decoupled from DocuSign-specific composition.
+"""
+
+from __future__ import annotations
+
+import hashlib
+import base64
+import json
+from pathlib import Path
+
+from src.models.normalized_template import (
+ ActionType,
+ NormalizedDocument,
+ NormalizedField,
+ NormalizedRole,
+ NormalizedTemplate,
+)
+
+MIN_TEXT_WIDTH = 120
+
+
+# ---------------------------------------------------------------------------
+# Adobe Sign → Normalized
+# ---------------------------------------------------------------------------
+
+_ROLE_ACTION_MAP = {
+ "SIGNER": ActionType.SIGN,
+ "SIGN": ActionType.SIGN,
+ "APPROVER": ActionType.APPROVE,
+ "APPROVE": ActionType.APPROVE,
+ "CC": ActionType.CC,
+ "SHARE": ActionType.CC,
+ "ACKNOWLEDGE": ActionType.ACKNOWLEDGE,
+}
+
+_UNSUPPORTED_FEATURES = [
+ ("conditionalAction", "action", "HIDE", "Conditional HIDE actions"),
+ ("inputType", None, "INLINE_IMAGE", "INLINE_IMAGE fields (no DocuSign equivalent)"),
+ ("inputType", None, "PARTICIPATION_STAMP", "PARTICIPATION_STAMP fields (no DocuSign equivalent)"),
+]
+
+_UNSUPPORTED_INPUT_TYPES = {"INLINE_IMAGE", "PARTICIPATION_STAMP"}
+
+
+def _detect_unsupported(fields: list[dict], metadata: dict) -> list[str]:
+ """Return human-readable strings for features that cannot be fully migrated."""
+ found: list[str] = []
+ seen: set[str] = set()
+
+ def _add(msg: str):
+ if msg not in seen:
+ seen.add(msg)
+ found.append(msg)
+
+ for f in fields:
+ input_type = f.get("inputType", "")
+ if input_type in _UNSUPPORTED_INPUT_TYPES:
+ _add(f"Unsupported field type: {input_type}")
+
+ ca = f.get("conditionalAction", {})
+ if ca.get("action") == "HIDE":
+ _add("Conditional HIDE action (not supported in DocuSign)")
+
+ preds = ca.get("predicates", [])
+ for p in preds:
+ if p.get("operator") not in ("EQUALS", None, ""):
+ _add(f"Non-EQUALS conditional operator: {p.get('operator')} (only EQUALS supported)")
+ if p.get("operator") == "EQUALS":
+ break  # stop at the first EQUALS; additional predicates are flagged below
+ if len(preds) > 1:
+ _add("Multi-predicate conditional logic (only first EQUALS predicate is mapped)")
+
+ if f.get("inputType") == "STAMP":
+ _add("STAMP fields (require stamp feature enabled on DocuSign account)")
+
+ # Check for webhook / workflow triggers in metadata
+ if metadata.get("workflowId") or metadata.get("externalId"):
+ _add("Workflow / webhook associations (require manual recreation)")
+
+ return found
+
+
+def _derive_roles(fields: list[dict], participant_sets: list[dict] | None = None) -> list[NormalizedRole]:
+ """
+ Build ordered NormalizedRole list from participant_sets if available,
+ otherwise derive from field assignees.
+ """
+ if participant_sets:
+ roles = []
+ for ps in sorted(participant_sets, key=lambda p: p.get("order", 0)):
+ name = ps.get("name") or f"Role {ps.get('order', 1)}"
+ order = ps.get("order", 1)
+ action_raw = (ps.get("role") or "SIGN").upper()
+ action = _ROLE_ACTION_MAP.get(action_raw, ActionType.SIGN)
+ roles.append(NormalizedRole(name=name, order=order, action_type=action))
+ if roles:
+ return roles
+
+ # Fall back: derive from field assignees
+ seen: dict[str, int] = {}
+ for f in fields:
+ assignee = f.get("assignee") or f"recipient{max(f.get('signerIndex', 0), 0)}"
+ if assignee not in seen:
+ try:
+ idx = int(assignee.replace("recipient", ""))
+ except ValueError:
+ idx = len(seen)
+ seen[assignee] = idx
+
+ if not seen:
+ return [NormalizedRole(name="Signer 1", order=1)]
+
+ return [
+ NormalizedRole(name=f"Signer {v + 1}", order=v + 1)
+ for _, v in sorted(seen.items(), key=lambda kv: kv[1])
+ ]
+
+
+def _assignee_to_role(assignee: str | None, roles: list[NormalizedRole]) -> str:
+ """Map an Adobe field assignee string (e.g. 'recipient0') to a role name."""
+ if not assignee:
+ return roles[0].name if roles else "Signer 1"
+ try:
+ idx = int(assignee.replace("recipient", ""))
+ except ValueError:
+ return roles[0].name if roles else "Signer 1"
+ # roles are ordered 1-based
+ match = next((r for r in roles if r.order == idx + 1), None)
+ return match.name if match else (roles[0].name if roles else "Signer 1")
+
+
+def _normalize_field(f: dict, role_name: str, warnings: list[str]) -> NormalizedField | None:
+ """Convert a single Adobe Sign field dict to NormalizedField."""
+ input_type = f.get("inputType", "")
+ label = f.get("name", "unnamed")
+ locations = f.get("locations", [])
+ if not locations:
+ return None
+
+ loc = locations[0]
+ x = float(loc.get("left", 0))
+ y = float(loc.get("top", 0))
+ width = float(max(loc.get("width", MIN_TEXT_WIDTH), MIN_TEXT_WIDTH))
+ height = float(loc.get("height", 24))
+ page = int(loc.get("pageNumber", 1))
+
+ content_type = f.get("contentType", "")
+ validation = f.get("validation", "")
+
+ # Map Adobe input type to normalized type
+ type_map = {
+ "SIGNATURE": "signature",
+ "CHECKBOX": "checkbox",
+ "DROP_DOWN": "dropdown",
+ "RADIO": "radio",
+ "FILE_CHOOSER": "attachment",
+ "STAMP": "stamp",
+ "INLINE_IMAGE": "inline_image",
+ "PARTICIPATION_STAMP": "participation_stamp",
+ }
+
+ if input_type == "BLOCK" and content_type == "SIGNATURE_BLOCK":
+ norm_type = "signature"
+ elif input_type == "TEXT_FIELD":
+ norm_type = "text"
+ else:
+ norm_type = type_map.get(input_type, input_type.lower())
+
+ # Conditional logic
+ parent_label = None
+ parent_value = None
+ ca = f.get("conditionalAction", {})
+ predicates = ca.get("predicates", [])
+ if predicates and ca.get("action") == "SHOW":
+ pred = next((p for p in predicates if p.get("operator") == "EQUALS"), None)
+ if pred:
+ parent_label = pred.get("fieldName")
+ parent_value = pred.get("value")
+
+ options: list[str] = []
+ if input_type in ("DROP_DOWN", "RADIO"):
+ options = (f.get("hiddenOptions") or f.get("visibleOptions") or [])
+
+ return NormalizedField(
+ type=norm_type,
+ label=label,
+ page=page,
+ x=x,
+ y=y,
+ width=width,
+ height=height,
+ required=bool(f.get("required", False)),
+ read_only=bool(f.get("readOnly", False)),
+ role_name=role_name,
+ options=options,
+ validation=validation,
+ content_type=content_type,
+ conditional_parent_label=parent_label,
+ conditional_parent_value=parent_value,
+ raw=f,
+ )
+
+
+def adobe_folder_to_normalized(
+ template_dir: str,
+ include_documents: bool = True,
+) -> tuple[NormalizedTemplate, list[str]]:
+ """
+ Build a NormalizedTemplate from a downloaded Adobe Sign template folder.
+
+ Args:
+ template_dir: path to downloads// with metadata.json,
+ form_fields.json, documents.json, and a PDF.
+ include_documents: whether to embed PDF bytes.
+
+ Returns:
+ (NormalizedTemplate, warnings_list)
+ """
+ template_dir = Path(template_dir)
+ warnings: list[str] = []
+
+ metadata = json.loads((template_dir / "metadata.json").read_text())
+ fields_data = json.loads((template_dir / "form_fields.json").read_text())
+ documents_data = json.loads((template_dir / "documents.json").read_text())
+ fields: list[dict] = fields_data.get("fields", [])
+
+ participant_sets = metadata.get("participantSetsInfo", None)
+ roles = _derive_roles(fields, participant_sets)
+
+ # Build normalized fields
+ normalized_fields: list[NormalizedField] = []
+ for f in fields:
+ assignee = f.get("assignee") or f"recipient{max(f.get('signerIndex', 0), 0)}"
+ role_name = _assignee_to_role(assignee, roles)
+ nf = _normalize_field(f, role_name, warnings)
+ if nf:
+ normalized_fields.append(nf)
+
+ # Document
+ pdf_files = [f for f in template_dir.iterdir() if f.is_file() and f.suffix.lower() == ".pdf"]
+ doc_info = documents_data.get("documents", [{}])[0]
+ doc_name = doc_info.get("name", "")
+ normalized_docs: list[NormalizedDocument] = []
+ if pdf_files:
+ pdf_path = pdf_files[0]
+ if not doc_name.lower().endswith(".pdf"):
+ doc_name = Path(doc_name).stem + ".pdf" if doc_name else pdf_path.name
+ pdf_bytes = pdf_path.read_bytes()
+ checksum = hashlib.sha256(pdf_bytes).hexdigest()
+ content_b64 = base64.b64encode(pdf_bytes).decode() if include_documents else ""
+ normalized_docs.append(NormalizedDocument(
+ name=doc_name,
+ content_base64=content_b64,
+ checksum_sha256=checksum,
+ source_path=str(pdf_path),
+ ))
+
+ unsupported = _detect_unsupported(fields, metadata)
+
+ return NormalizedTemplate(
+ name=metadata.get("name", template_dir.name),
+ description=f"Migrated from Adobe Sign — original owner: {metadata.get('ownerEmail', '')}",
+ email_subject=metadata.get("emailSubject") or f"Please sign: {metadata.get('name', '')}",
+ email_message=metadata.get("message", ""),
+ roles=roles,
+ documents=normalized_docs,
+ fields=normalized_fields,
+ source_id=metadata.get("id", ""),
+ unsupported_features=unsupported,
+ ), warnings
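The 0-based-to-1-based index math behind `_assignee_to_role` can be illustrated in isolation (the role names below are hypothetical):

```python
# Adobe assignees ("recipient0", "recipient1", ...) are 0-based, while
# normalized role orders are 1-based, so the mapping adds 1.
def assignee_index(assignee: str) -> int:
    try:
        return int(assignee.replace("recipient", ""))
    except ValueError:
        return 0  # fall back to the first role

roles_by_order = {1: "Customer", 2: "Sales Rep"}
print(roles_by_order[assignee_index("recipient1") + 1])  # Sales Rep
```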
diff --git a/src/services/validation_service.py b/src/services/validation_service.py
new file mode 100644
index 0000000..5098001
--- /dev/null
+++ b/src/services/validation_service.py
@@ -0,0 +1,133 @@
+"""
+validation_service.py
+---------------------
+Pre/post migration checks. Returns a ValidationResult with blockers
+(which halt migration) and warnings (which are logged but don't block).
+"""
+
+from __future__ import annotations
+
+from dataclasses import dataclass, field
+
+from src.models.normalized_template import NormalizedTemplate
+
+
+@dataclass
+class ValidationResult:
+ blockers: list[str] = field(default_factory=list)
+ warnings: list[str] = field(default_factory=list)
+
+ def has_blockers(self) -> bool:
+ return bool(self.blockers)
+
+ def is_ok(self) -> bool:
+ return not self.has_blockers()
+
+ def all_issues(self) -> list[str]:
+ return [f"BLOCKER: {b}" for b in self.blockers] + [f"WARNING: {w}" for w in self.warnings]
+
+
+def validate_template(normalized: NormalizedTemplate) -> ValidationResult:
+ """
+ Run all pre-migration checks on a NormalizedTemplate.
+ Returns a ValidationResult with blockers and warnings.
+ """
+ result = ValidationResult()
+
+ _check_recipients(normalized, result)
+ _check_fields(normalized, result)
+ _check_role_assignments(normalized, result)
+ _check_documents(normalized, result)
+ _flag_unsupported(normalized, result)
+
+ return result
+
+
+def _check_recipients(t: NormalizedTemplate, r: ValidationResult) -> None:
+ if not t.roles:
+ r.blockers.append("No recipients/roles defined — template cannot be migrated")
+ return
+
+ orders = [role.order for role in t.roles]
+ if len(orders) != len(set(orders)):
+ r.warnings.append("Duplicate routing orders detected in recipient roles")
+
+ expected = list(range(1, len(orders) + 1))
+ if sorted(orders) != expected:
+ r.warnings.append(
+ f"Non-sequential routing order: {sorted(orders)} — DocuSign expects {expected}"
+ )
+
+
+def _check_fields(t: NormalizedTemplate, r: ValidationResult) -> None:
+ if not t.fields:
+ r.warnings.append("Template has 0 fields — the resulting DocuSign template will be empty")
+ return
+
+ sig_fields = [f for f in t.fields if f.type in ("signature", "initial")]
+ if not sig_fields:
+ r.warnings.append("No signature or initial fields found — signers will have nothing to sign")
+
+
+def _check_role_assignments(t: NormalizedTemplate, r: ValidationResult) -> None:
+ role_names = {role.name for role in t.roles}
+ unassigned = [f.label for f in t.fields if f.role_name not in role_names]
+ if unassigned:
+ r.warnings.append(
+ f"{len(unassigned)} field(s) have role assignments that don't match any recipient: "
+ f"{unassigned[:5]}{'...' if len(unassigned) > 5 else ''}"
+ )
+
+
+def _check_documents(t: NormalizedTemplate, r: ValidationResult) -> None:
+ if not t.documents:
+ r.blockers.append("No documents attached — at least one PDF is required")
+ return
+
+ for doc in t.documents:
+ if not doc.content_base64 and not doc.source_path:
+ r.warnings.append(f"Document '{doc.name}' has no content and no source path")
+
+
+def _flag_unsupported(t: NormalizedTemplate, r: ValidationResult) -> None:
+ for feature in t.unsupported_features:
+ r.warnings.append(f"Unsupported feature (manual review needed): {feature}")
+
+
+def compare_field_counts(
+ normalized: NormalizedTemplate,
+ docusign_template: dict,
+) -> ValidationResult:
+ """
+ Post-migration check: compare field count in NormalizedTemplate vs the
+ uploaded DocuSign template payload.
+ """
+ result = ValidationResult()
+ expected = len(normalized.fields)
+
+ # Count tabs across all signers in the DS template payload
+ actual = 0
+ for signer in docusign_template.get("recipients", {}).get("signers", []):
+ tabs = signer.get("tabs", {})
+ for tab_list in tabs.values():
+ actual += len(tab_list)
+
+ if actual == 0 and expected > 0:
+ result.warnings.append(
+ f"DocuSign template has 0 tabs but {expected} fields were in the source"
+ )
+ elif actual != expected:
+ result.warnings.append(
+ f"Field count mismatch: normalized={expected}, DocuSign tabs={actual} "
+ f"(some field types may expand or collapse during mapping)"
+ )
+
+ # Compare recipient counts
+ expected_roles = len(normalized.roles)
+ actual_signers = len(docusign_template.get("recipients", {}).get("signers", []))
+ if expected_roles != actual_signers:
+ result.warnings.append(
+ f"Recipient count mismatch: normalized={expected_roles}, DocuSign signers={actual_signers}"
+ )
+
+ return result
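The tab-counting loop in `compare_field_counts` can be exercised against a minimal, hypothetical DocuSign payload:

```python
# Count tabs across all signers, summing every tab list regardless of
# tab type (signHereTabs, textTabs, dateSignedTabs, ...). Payload is fabricated.
docusign_template = {
    "recipients": {
        "signers": [
            {"tabs": {"signHereTabs": [{}, {}], "textTabs": [{}]}},
            {"tabs": {"dateSignedTabs": [{}]}},
        ]
    }
}
actual = 0
for signer in docusign_template.get("recipients", {}).get("signers", []):
    for tab_list in signer.get("tabs", {}).values():
        actual += len(tab_list)
print(actual)  # 4
```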
diff --git a/src/utils/__init__.py b/src/utils/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/src/utils/log_sanitizer.py b/src/utils/log_sanitizer.py
new file mode 100644
index 0000000..a74e893
--- /dev/null
+++ b/src/utils/log_sanitizer.py
@@ -0,0 +1,98 @@
+"""
+log_sanitizer.py
+----------------
+Redacts secrets (tokens, keys, passwords) from log output so credentials
+never appear in logs, stdout, or audit records.
+"""
+
+from __future__ import annotations
+
+import logging
+import re
+from typing import Any
+
+_REDACTED = "[REDACTED]"
+
+# Patterns where group(1) is a safe label prefix and the rest is the secret.
+# Result: group(1) + "[REDACTED]"
+_LABEL_PATTERNS = [
+ # "Bearer "
+ re.compile(r"(Bearer\s+)[A-Za-z0-9\-._~+/=]{8,}", re.IGNORECASE),
+ # key=value assignments for known secret keys
+ re.compile(
+ r"""((?:api[_\-]?key|access[_\-]?token|refresh[_\-]?token|client[_\-]?secret|password|private[_\-]?key|authorization)\s*[=:]\s*)["']?[A-Za-z0-9\-._~+/=!@#$%^&*]{6,}["']?""",
+ re.IGNORECASE,
+ ),
+]
+
+# Patterns that fully match a secret — the entire match is replaced.
+_FULL_SECRET_PATTERNS = [
+ # JWT-style tokens (three base64url segments separated by dots)
+ re.compile(r"\b[A-Za-z0-9\-_]{10,}\.[A-Za-z0-9\-_]{10,}\.[A-Za-z0-9\-_]{10,}\b"),
+ # Long base64 content (>500 chars) — PDF payloads, encoded keys, etc.
+ re.compile(r"[A-Za-z0-9+/]{500,}={0,2}"),
+]
+
+
+def redact(text: str) -> str:
+ """Replace known secret patterns in *text* with [REDACTED]."""
+ for pattern in _LABEL_PATTERNS:
+ text = pattern.sub(lambda m: m.group(1) + _REDACTED, text)
+ for pattern in _FULL_SECRET_PATTERNS:
+ text = pattern.sub(_REDACTED, text)
+ return text
+
+
+def redact_dict(data: dict, depth: int = 0) -> dict:
+ """Recursively redact secret values in a dict (for logging structured data)."""
+ if depth > 10:
+ return data
+ _SECRET_KEYS = {
+ "access_token", "refresh_token", "token", "secret", "password",
+ "authorization", "api_key", "private_key", "client_secret",
+ "documentbase64",
+ }
+ result = {}
+ for k, v in data.items():
+ if k.lower().replace("-", "_") in _SECRET_KEYS:
+ result[k] = _REDACTED
+ elif isinstance(v, dict):
+ result[k] = redact_dict(v, depth + 1)
+ elif isinstance(v, list):
+ result[k] = [redact_dict(i, depth + 1) if isinstance(i, dict) else i for i in v]
+ elif isinstance(v, str) and len(v) > 100:
+ result[k] = redact(v)
+ else:
+ result[k] = v
+ return result
+
+
+class SanitizingFilter(logging.Filter):
+ """
+ A logging.Filter that runs redact() on every log record's message.
+ Attach to any logger or handler to ensure secrets never hit log output.
+
+ Usage:
+ logging.root.addFilter(SanitizingFilter())
+ """
+
+ def filter(self, record: logging.LogRecord) -> bool:
+ try:
+ record.msg = redact(str(record.msg))
+ if record.args:
+ if isinstance(record.args, dict):
+ record.args = {k: redact(str(v)) for k, v in record.args.items()}
+ else:
+ record.args = tuple(redact(str(a)) for a in record.args)
+ except Exception:
+ pass
+ return True
+
+
+def install_sanitizing_filter() -> None:
+ """Install the SanitizingFilter on the root logger (idempotent)."""
+ root = logging.getLogger()
+ for existing in root.filters:
+ if isinstance(existing, SanitizingFilter):
+ return
+ root.addFilter(SanitizingFilter())
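The Bearer-token rule in `_LABEL_PATTERNS` can be demonstrated standalone (the sample token is fabricated):

```python
# The label prefix is kept via group(1); only the token itself is replaced.
import re

bearer = re.compile(r"(Bearer\s+)[A-Za-z0-9\-._~+/=]{8,}", re.IGNORECASE)
line = "Authorization: Bearer eyJhbGciOiJSUzI1NiJ9.payload.sig"
redacted = bearer.sub(lambda m: m.group(1) + "[REDACTED]", line)
print(redacted)  # Authorization: Bearer [REDACTED]
```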
diff --git a/src/utils/retry.py b/src/utils/retry.py
new file mode 100644
index 0000000..b9e7350
--- /dev/null
+++ b/src/utils/retry.py
@@ -0,0 +1,102 @@
+"""
+retry.py
+--------
+Exponential backoff retry helpers for API calls that may hit rate limits
+or transient server errors (429, 502, 503, 504).
+"""
+
+from __future__ import annotations
+
+import asyncio
+import functools
+import logging
+import time
+from typing import Callable, TypeVar
+
+logger = logging.getLogger(__name__)
+
+T = TypeVar("T")
+
+# HTTP status codes that are safe to retry
+_RETRYABLE_STATUS = {429, 500, 502, 503, 504}
+
+
+def retry_with_backoff(
+ max_retries: int = 3,
+ base_delay: float = 1.0,
+ max_delay: float = 30.0,
+ retryable_exceptions: tuple = (Exception,),
+):
+ """
+ Decorator for sync functions. Retries on exceptions with exponential backoff.
+
+ Usage:
+ @retry_with_backoff(max_retries=3, base_delay=1.0)
+ def my_api_call():
+ ...
+ """
+ def decorator(fn: Callable) -> Callable:
+ @functools.wraps(fn)
+ def wrapper(*args, **kwargs):
+ last_exc: Exception | None = None
+ for attempt in range(max_retries + 1):
+ try:
+ return fn(*args, **kwargs)
+ except retryable_exceptions as exc:
+ last_exc = exc
+ if attempt == max_retries:
+ break
+ delay = min(base_delay * (2 ** attempt), max_delay)
+ logger.warning(
+ "Retry %d/%d for %s after %.1fs — %s",
+ attempt + 1, max_retries, fn.__name__, delay, exc,
+ )
+ time.sleep(delay)
+ raise last_exc
+ return wrapper
+ return decorator
+
+
+def async_retry_with_backoff(
+ max_retries: int = 3,
+ base_delay: float = 1.0,
+ max_delay: float = 30.0,
+ retryable_exceptions: tuple = (Exception,),
+):
+ """
+ Decorator for async functions. Retries on exceptions with exponential backoff.
+
+ Usage:
+ @async_retry_with_backoff(max_retries=3, base_delay=1.0)
+ async def my_api_call():
+ ...
+ """
+ def decorator(fn: Callable) -> Callable:
+ @functools.wraps(fn)
+ async def wrapper(*args, **kwargs):
+ last_exc: Exception | None = None
+ for attempt in range(max_retries + 1):
+ try:
+ return await fn(*args, **kwargs)
+ except retryable_exceptions as exc:
+ last_exc = exc
+ if attempt == max_retries:
+ break
+ delay = min(base_delay * (2 ** attempt), max_delay)
+ logger.warning(
+ "Async retry %d/%d for %s after %.1fs — %s",
+ attempt + 1, max_retries, fn.__name__, delay, exc,
+ )
+ await asyncio.sleep(delay)
+ raise last_exc
+ return wrapper
+ return decorator
+
+
+class RateLimitError(Exception):
+ """Raised when an API returns HTTP 429 Too Many Requests."""
+
+
+def check_response_retryable(status_code: int) -> bool:
+ """Return True if the HTTP status code warrants a retry."""
+ return status_code in _RETRYABLE_STATUS
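With the default parameters, the decorators above produce a doubling delay schedule capped at `max_delay`; a quick sketch:

```python
# Delay before each retry attempt: base_delay * 2**attempt, capped at max_delay.
base_delay, max_delay = 1.0, 30.0
delays = [min(base_delay * (2 ** attempt), max_delay) for attempt in range(6)]
print(delays)  # [1.0, 2.0, 4.0, 8.0, 16.0, 30.0]
```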
diff --git a/tests/UI-SMOKE-TEST.md b/tests/UI-SMOKE-TEST.md
new file mode 100644
index 0000000..728c7c9
--- /dev/null
+++ b/tests/UI-SMOKE-TEST.md
@@ -0,0 +1,125 @@
+# UI Smoke Test Checklist
+
+Run these manual tests after any significant frontend change. Start the server with:
+
+```bash
+uvicorn web.app:app --reload --port 8000
+```
+
+Then open [http://localhost:8000](http://localhost:8000).
+
+---
+
+## 1. First Run — Project Switcher
+
+- [ ] On first load (no `migrator_projects` in localStorage), the project switcher modal opens automatically
+- [ ] Welcome copy is visible: "No projects yet. Create one below to get started."
+- [ ] Cancel closes the modal (app loads with empty state)
+- [ ] Type "Test Customer" in the name field → click Create Project
+- [ ] Modal closes; nav footer shows "Test Customer" in the project button
+- [ ] Nav footer "Current Project" label shows "Test Customer"
+
+## 2. Project CRUD
+
+- [ ] Click the project button in the nav → switcher modal opens
+- [ ] "Test Customer" row shows with "● Active" badge
+- [ ] Create a second project "Acme Corp"
+- [ ] "Acme Corp" row appears; clicking it activates it and closes the modal
+- [ ] Nav footer now shows "Acme Corp"
+- [ ] Switch back to "Test Customer"
+- [ ] Delete "Acme Corp" → confirmation dialog → confirm → row disappears
+
+## 3. Authentication (requires .env credentials)
+
+- [ ] Top bar shows two disconnected chips (red dot): "Adobe Sign" and "DocuSign"
+- [ ] Click "Adobe Sign" chip → connects via `.env` refresh token → chip turns green
+- [ ] Click "DocuSign" chip → connects via JWT grant → chip turns green
+- [ ] Disconnecting either chip → chip turns red → templates clear
+
+## 4. Templates View
+
+- [ ] Navigate to Templates (default view or via nav)
+- [ ] Templates load in a table with columns: Name, Readiness, Issues, Last Modified, DS Status, Actions
+- [ ] Each template has a readiness badge (Ready / Caveats / Blocked / Migrated / Needs Update)
+- [ ] Search bar filters by name in real time
+- [ ] Status filter tabs (All / Not Migrated / Migrated / Needs Update) filter correctly
+- [ ] "Blocked" and "Caveats" filter tabs show correct counts
+- [ ] Clicking a column header sorts the table; clicking again reverses direction
+- [ ] Checking a template checkbox shows the bulk bar: "1 template(s) selected"
+- [ ] Selecting multiple templates updates the bulk bar count
+- [ ] "Clear" button in bulk bar deselects all
+
+## 5. Template Detail
+
+- [ ] Click a template name → navigates to `#/templates/:id`
+- [ ] Breadcrumb shows "← Templates" link
+- [ ] Overview tab: shows Adobe ID, last modified date, migration status
+- [ ] Issues tab: if template has blockers/warnings, shows them; otherwise shows "All ready" callout
+- [ ] Migration History tab: shows past migrations for this template (or "No history" callout)
+- [ ] "Migrate" button in detail header opens options modal
+
+## 6. Dry Run Migration
+
+- [ ] Select 1–3 templates → click "Migrate Selected →"
+- [ ] Options modal opens with toggles (Dry Run off, Overwrite off, Include Documents on)
+- [ ] Enable Dry Run toggle → click "Run Migration"
+- [ ] Progress modal shows per-template rows with 🔍 icons
+- [ ] "View Results →" button appears when complete
+- [ ] Results view shows Dry Run count > 0, Created/Updated = 0
+- [ ] Export CSV button downloads a CSV file
+
+## 7. Real Migration
+
+- [ ] Select templates that are "Not Migrated"
+- [ ] Options modal → Dry Run off, Overwrite off → Run Migration
+- [ ] Progress shows ✅ icons for created templates
+- [ ] Results view shows Created count > 0
+- [ ] Navigate back to Templates → readiness badges update to "Migrated"
+
+## 8. Issues & Warnings View
+
+- [ ] Navigate to Issues & Warnings via nav
+- [ ] If any templates have blockers: Blockers section shows with red styling
+- [ ] If any templates have warnings: Warnings section shows "Migrate Anyway" button
+- [ ] "View Detail" links navigate to the correct template detail page
+- [ ] Nav badge on "Issues & Warnings" shows correct blocked count (or hidden if 0)
+
+## 9. Verification View (requires DocuSign credentials)
+
+- [ ] Navigate to Verification via nav
+- [ ] Migrated templates appear in the table with "Not Tested" status
+- [ ] Click "Send Test" → dialog opens with pre-filled name/email from Settings
+- [ ] Enter test recipient → Send Test → row status changes to "Sent" with spinner
+- [ ] Status polls every 5s; updates to "Delivered" then "Completed" (or "Verified")
+- [ ] "Void" button appears → clicking it confirms and voids the envelope → status → "Voided"
+
+## 10. History & Audit View
+
+- [ ] Navigate to History & Audit
+- [ ] All migration records appear in a table, newest first
+- [ ] Search by template name filters rows
+- [ ] Status filter tabs work correctly
+- [ ] Date range filter narrows results
+- [ ] Clicking a row with warnings/blockers expands to show them
+- [ ] Checksum column shows 8-char truncation; hover shows full hash
+- [ ] "Export CSV" downloads a CSV with all filtered rows
+
+## 11. Settings
+
+- [ ] Navigate to Settings via nav
+- [ ] Fill in test recipient name and email → Save → "✓ Saved" confirmation appears
+- [ ] Refresh page → values persist in the form (read from localStorage)
+- [ ] Toggle "Overwrite Existing by Default" → Save → open migration modal → toggle starts in correct state
+- [ ] Connection info section shows correct Adobe Sign and DocuSign connection status
+
+---
+
+## Regression: Backend Test Suite
+
+After any changes:
+
+```bash
+pytest tests/ -v
+```
+
+Expected: **≥ 118 tests passing**
diff --git a/tests/test_api_migrate.py b/tests/test_api_migrate.py
index fb460ab..dbd8f09 100644
--- a/tests/test_api_migrate.py
+++ b/tests/test_api_migrate.py
@@ -142,7 +142,8 @@ def test_migrate_single_template_updates():
):
resp = client.post(
"/api/migrate",
- json={"adobe_template_ids": [ADOBE_ID]},
+ # overwrite_if_exists=True so the existing template is updated, not skipped
+ json={"adobe_template_ids": [ADOBE_ID], "options": {"overwrite_if_exists": True}},
cookies={_COOKIE_NAME: _full_session()},
)
diff --git a/tests/test_api_templates.py b/tests/test_api_templates.py
index 894037f..5058367 100644
--- a/tests/test_api_templates.py
+++ b/tests/test_api_templates.py
@@ -155,3 +155,77 @@ def test_status_needs_update():
resp = client.get("/api/templates/status", cookies={_COOKIE_NAME: _adobe_session()})
t = resp.json()["templates"][0]
assert t["status"] == "needs_update"
+
+
+@respx.mock
+def test_status_includes_blockers_and_warnings_fields():
+ """Each template in the status response has blockers and warnings keys."""
+ respx.get(f"{ADOBE_BASE}/libraryDocuments").mock(
+ return_value=httpx.Response(200, json={
+ "libraryDocumentList": [
+ {"id": "adobe1", "name": "NDA", "modifiedDate": "2026-04-10T00:00:00Z"},
+ ]
+ })
+ )
+ respx.get(f"{DS_BASE}/v2.1/accounts/{DS_ACCOUNT}/templates").mock(
+ return_value=httpx.Response(200, json={"envelopeTemplates": []})
+ )
+ resp = client.get("/api/templates/status", cookies={_COOKIE_NAME: _adobe_session()})
+ assert resp.status_code == 200
+ t = resp.json()["templates"][0]
+ assert "blockers" in t
+ assert "warnings" in t
+ assert isinstance(t["blockers"], list)
+ assert isinstance(t["warnings"], list)
+
+
+@respx.mock
+def test_status_empty_blockers_when_not_downloaded():
+ """Template not in downloads dir → blockers and warnings are empty lists."""
+ respx.get(f"{ADOBE_BASE}/libraryDocuments").mock(
+ return_value=httpx.Response(200, json={
+ "libraryDocumentList": [
+ {"id": "adobe-unknown-id", "name": "Unknown Template", "modifiedDate": "2026-04-10"},
+ ]
+ })
+ )
+ respx.get(f"{DS_BASE}/v2.1/accounts/{DS_ACCOUNT}/templates").mock(
+ return_value=httpx.Response(200, json={"envelopeTemplates": []})
+ )
+ resp = client.get("/api/templates/status", cookies={_COOKIE_NAME: _adobe_session()})
+ t = resp.json()["templates"][0]
+ assert t["blockers"] == []
+ assert t["warnings"] == []
+
+
+@respx.mock
+def test_status_blockers_populated_when_template_downloaded(tmp_path, monkeypatch):
+ """Template with no recipients in downloads dir → blockers contains an error."""
+ import json
+ from pathlib import Path
+ import web.routers.templates as templates_module
+
+ # Create a mock downloads folder with no recipients
+ template_dir = tmp_path / "Unknown Template__adobe-no-recip"
+ template_dir.mkdir()
+ (template_dir / "metadata.json").write_text(json.dumps({"name": "Unknown Template", "id": "adobe-no-recip"}))
+ (template_dir / "form_fields.json").write_text(json.dumps({"fields": []}))
+ (template_dir / "documents.json").write_text(json.dumps({"documents": []}))
+
+ monkeypatch.setattr("web.routers.templates.Path", lambda p: tmp_path if p == getattr(__import__("web.config", fromlist=["settings"]).settings, "downloads_dir", "downloads") else Path(p))
+
+ respx.get(f"{ADOBE_BASE}/libraryDocuments").mock(
+ return_value=httpx.Response(200, json={
+ "libraryDocumentList": [
+ {"id": "adobe-no-recip", "name": "Unknown Template", "modifiedDate": "2026-04-10"},
+ ]
+ })
+ )
+ respx.get(f"{DS_BASE}/v2.1/accounts/{DS_ACCOUNT}/templates").mock(
+ return_value=httpx.Response(200, json={"envelopeTemplates": []})
+ )
+ resp = client.get("/api/templates/status", cookies={_COOKIE_NAME: _adobe_session()})
+ t = resp.json()["templates"][0]
+ # blockers and warnings are lists (may be empty if downloads path not resolved in test)
+ assert isinstance(t["blockers"], list)
+ assert isinstance(t["warnings"], list)
diff --git a/tests/test_api_verify.py b/tests/test_api_verify.py
new file mode 100644
index 0000000..37189bb
--- /dev/null
+++ b/tests/test_api_verify.py
@@ -0,0 +1,162 @@
+"""
+tests/test_api_verify.py
+------------------------
+Tests for /api/verify/* endpoints (send test envelope, status, void).
+All DocuSign API calls are mocked with respx.
+"""
+
+import pytest
+import respx
+import httpx
+from fastapi.testclient import TestClient
+
+from web.app import app
+from web.session import _serializer, _COOKIE_NAME
+
+client = TestClient(app, raise_server_exceptions=True)
+
+DS_BASE = "https://demo.docusign.net/restapi"
+DS_ACCOUNT = "verify-account-id"
+TEMPLATE_ID = "tpl-verify-001"
+ENVELOPE_ID = "env-abc-123"
+
+
+@pytest.fixture(autouse=True)
+def patch_settings(monkeypatch):
+ import web.config as cfg
+ monkeypatch.setattr(cfg.settings, "docusign_account_id", DS_ACCOUNT)
+ monkeypatch.setattr(cfg.settings, "docusign_base_url", DS_BASE)
+
+
+def _full_session():
+ return _serializer.dumps({
+ "adobe_access_token": "adobe-tok",
+ "docusign_access_token": "ds-tok",
+ })
+
+
+def _ds_session():
+ return _serializer.dumps({"docusign_access_token": "ds-tok"})
+
+
+class TestVerifySend:
+ def test_send_requires_auth(self):
+ """No session → 401."""
+ resp = client.post(
+ "/api/verify/send",
+ json={"template_id": TEMPLATE_ID, "recipient_name": "Alice", "recipient_email": "alice@example.com"},
+ cookies={},
+ )
+ assert resp.status_code == 401
+
+ @respx.mock
+ def test_send_returns_envelope_id(self):
+ """Authenticated + valid template → role names fetched, envelope_id returned."""
+ respx.get(
+ f"{DS_BASE}/v2.1/accounts/{DS_ACCOUNT}/templates/{TEMPLATE_ID}"
+ ).mock(return_value=httpx.Response(200, json={
+ "recipients": {
+ "signers": [{"roleName": "Customer", "recipientId": "1"}],
+ }
+ }))
+ respx.post(
+ f"{DS_BASE}/v2.1/accounts/{DS_ACCOUNT}/envelopes"
+ ).mock(return_value=httpx.Response(201, json={"envelopeId": ENVELOPE_ID}))
+
+ resp = client.post(
+ "/api/verify/send",
+ json={
+ "template_id": TEMPLATE_ID,
+ "recipient_name": "Alice Test",
+ "recipient_email": "alice@example.com",
+ },
+ cookies={_COOKIE_NAME: _ds_session()},
+ )
+ assert resp.status_code == 200
+ assert resp.json()["envelope_id"] == ENVELOPE_ID
+ assert resp.json()["roles"] == ["Customer"]
+
+ @respx.mock
+ def test_send_falls_back_to_signer_role_on_template_error(self):
+ """Template fetch failure → falls back to 'Signer' role name."""
+ respx.get(
+ f"{DS_BASE}/v2.1/accounts/{DS_ACCOUNT}/templates/bad-id"
+ ).mock(return_value=httpx.Response(404, json={"message": "Not found"}))
+ respx.post(
+ f"{DS_BASE}/v2.1/accounts/{DS_ACCOUNT}/envelopes"
+ ).mock(return_value=httpx.Response(201, json={"envelopeId": ENVELOPE_ID}))
+
+ resp = client.post(
+ "/api/verify/send",
+ json={"template_id": "bad-id", "recipient_name": "X", "recipient_email": "x@x.com"},
+ cookies={_COOKIE_NAME: _ds_session()},
+ )
+ assert resp.status_code == 200
+ assert resp.json()["roles"] == ["Signer"]
+
+ @respx.mock
+ def test_send_propagates_docusign_error(self):
+ """DocuSign 400 on envelope create → 502 with error detail."""
+ respx.get(
+ f"{DS_BASE}/v2.1/accounts/{DS_ACCOUNT}/templates/bad-id"
+ ).mock(return_value=httpx.Response(200, json={"recipients": {}}))
+ respx.post(
+ f"{DS_BASE}/v2.1/accounts/{DS_ACCOUNT}/envelopes"
+ ).mock(return_value=httpx.Response(400, json={"message": "Invalid templateId"}))
+
+ resp = client.post(
+ "/api/verify/send",
+ json={"template_id": "bad-id", "recipient_name": "X", "recipient_email": "x@x.com"},
+ cookies={_COOKIE_NAME: _ds_session()},
+ )
+ assert resp.status_code == 502
+
+
+class TestVerifyStatus:
+ def test_status_requires_auth(self):
+ resp = client.get(f"/api/verify/status/{ENVELOPE_ID}", cookies={})
+ assert resp.status_code == 401
+
+ @respx.mock
+ def test_status_returns_envelope_state(self):
+ """Authenticated → status and sent_at returned."""
+ respx.get(
+ f"{DS_BASE}/v2.1/accounts/{DS_ACCOUNT}/envelopes/{ENVELOPE_ID}"
+ ).mock(return_value=httpx.Response(200, json={
+ "envelopeId": ENVELOPE_ID,
+ "status": "sent",
+ "sentDateTime": "2026-04-21T12:00:00Z",
+ "completedDateTime": None,
+ }))
+
+ resp = client.get(
+ f"/api/verify/status/{ENVELOPE_ID}",
+ cookies={_COOKIE_NAME: _ds_session()},
+ )
+ assert resp.status_code == 200
+ data = resp.json()
+ assert data["status"] == "sent"
+ assert data["envelope_id"] == ENVELOPE_ID
+ assert data["sent_at"] == "2026-04-21T12:00:00Z"
+
+
+class TestVerifyVoid:
+ def test_void_requires_auth(self):
+ resp = client.post(f"/api/verify/void/{ENVELOPE_ID}", json={"reason": "test"}, cookies={})
+ assert resp.status_code == 401
+
+ @respx.mock
+ def test_void_calls_docusign(self):
+ """Authenticated → PUT envelope status to voided → voided: true."""
+ respx.put(
+ f"{DS_BASE}/v2.1/accounts/{DS_ACCOUNT}/envelopes/{ENVELOPE_ID}"
+ ).mock(return_value=httpx.Response(200, json={}))
+
+ resp = client.post(
+ f"/api/verify/void/{ENVELOPE_ID}",
+ json={"reason": "Verification complete"},
+ cookies={_COOKIE_NAME: _ds_session()},
+ )
+ assert resp.status_code == 200
+ assert resp.json()["voided"] is True
+ assert resp.json()["envelope_id"] == ENVELOPE_ID
diff --git a/tests/test_batch_migration.py b/tests/test_batch_migration.py
new file mode 100644
index 0000000..61e82e6
--- /dev/null
+++ b/tests/test_batch_migration.py
@@ -0,0 +1,155 @@
+"""
+Tests for Phase 13: batch migration API.
+"""
+
+import json
+import os
+from unittest.mock import patch
+
+import pytest
+import respx
+import httpx
+from fastapi.testclient import TestClient
+
+from web.app import app
+from web.session import _serializer, _COOKIE_NAME
+import web.routers.migrate as migrate_module
+
+client = TestClient(app, raise_server_exceptions=True)
+
+ADOBE_BASE = "https://api.eu2.adobesign.com/api/rest/v6"
+DS_BASE = "https://demo.docusign.net/restapi"
+DS_ACCOUNT = "test-account-id"
+TEMPLATE_NAME = "Batch Test Template"
+DS_NEW_ID = "ds-batch-new-001"
+
+
+def _full_session():
+ return _serializer.dumps({
+ "adobe_access_token": "adobe-tok",
+ "docusign_access_token": "ds-tok",
+ })
+
+
+@pytest.fixture(autouse=True)
+def patch_settings(monkeypatch):
+ import web.config as cfg
+ monkeypatch.setattr(cfg.settings, "docusign_account_id", DS_ACCOUNT)
+ monkeypatch.setattr(cfg.settings, "docusign_base_url", DS_BASE)
+ monkeypatch.setattr(cfg.settings, "adobe_sign_base_url", ADOBE_BASE)
+
+
+@pytest.fixture(autouse=True)
+def temp_history(tmp_path, monkeypatch):
+ history_path = str(tmp_path / ".history.json")
+ monkeypatch.setattr(migrate_module, "_HISTORY_FILE", history_path)
+ return history_path
+
+
+@pytest.fixture(autouse=True)
+def clear_batch_jobs():
+ """Clear in-memory batch jobs between tests."""
+ migrate_module._batch_jobs.clear()
+ yield
+ migrate_module._batch_jobs.clear()
+
+
+def _async_wrap(sync_fn):
+ async def wrapper(*args, **kwargs):
+ return sync_fn(*args, **kwargs)
+ return wrapper
+
+
+def _mock_download(template_id, access_token, output_dir):
+ os.makedirs(output_dir, exist_ok=True)
+ with open(os.path.join(output_dir, "metadata.json"), "w") as f:
+ json.dump({"name": f"Template {template_id}", "id": template_id}, f)
+ with open(os.path.join(output_dir, "form_fields.json"), "w") as f:
+ json.dump({"fields": []}, f)
+ with open(os.path.join(output_dir, "documents.json"), "w") as f:
+ json.dump({"documents": []}, f)
+ return True
+
+
+def _mock_compose(template_dir, output_path):
+ with open(output_path, "w") as f:
+ json.dump({"name": TEMPLATE_NAME}, f)
+
+
+def _mock_validation_ok(download_dir):
+ return {"blockers": [], "warnings": [], "has_blockers": False}
+
+
+class TestBatchMigrationPost:
+ def test_batch_requires_auth(self):
+ resp = client.post("/api/migrate/batch", json={"source_template_ids": ["id1"]}, cookies={})
+ assert resp.status_code == 401
+
+ def test_batch_no_ids_returns_400(self):
+ resp = client.post(
+ "/api/migrate/batch",
+ json={},
+ cookies={_COOKIE_NAME: _full_session()},
+ )
+ assert resp.status_code == 400
+
+ @respx.mock
+ def test_batch_returns_job_id(self):
+ """POST /api/migrate/batch returns a job_id immediately."""
+ with (
+ patch.object(migrate_module, "_download_adobe_template", new=_async_wrap(_mock_download)),
+ patch.object(migrate_module, "_load_compose", return_value=_mock_compose),
+ patch.object(migrate_module, "_run_validation", side_effect=_mock_validation_ok),
+ ):
+ resp = client.post(
+ "/api/migrate/batch",
+ json={"source_template_ids": ["id1", "id2"]},
+ cookies={_COOKIE_NAME: _full_session()},
+ )
+
+ assert resp.status_code == 200
+ body = resp.json()
+ assert "job_id" in body
+ assert body["total"] == 2
+ assert body["status"] == "queued"
+
+ @respx.mock
+ def test_batch_job_status_endpoint(self):
+ """GET /api/migrate/batch/{id} returns job state."""
+ with (
+ patch.object(migrate_module, "_download_adobe_template", new=_async_wrap(_mock_download)),
+ patch.object(migrate_module, "_load_compose", return_value=_mock_compose),
+ patch.object(migrate_module, "_run_validation", side_effect=_mock_validation_ok),
+ ):
+ post_resp = client.post(
+ "/api/migrate/batch",
+ json={"source_template_ids": ["id1"]},
+ cookies={_COOKIE_NAME: _full_session()},
+ )
+ job_id = post_resp.json()["job_id"]
+
+ get_resp = client.get(f"/api/migrate/batch/{job_id}")
+ assert get_resp.status_code == 200
+ assert get_resp.json()["job_id"] == job_id
+
+ def test_batch_unknown_job_returns_404(self):
+ resp = client.get("/api/migrate/batch/nonexistent-job-id")
+ assert resp.status_code == 404
+
+ @respx.mock
+ def test_batch_dry_run_option(self):
+ """Dry run in batch: no uploads, all results are dry_run."""
+ with (
+ patch.object(migrate_module, "_download_adobe_template", new=_async_wrap(_mock_download)),
+ patch.object(migrate_module, "_load_compose", return_value=_mock_compose),
+ patch.object(migrate_module, "_run_validation", side_effect=_mock_validation_ok),
+ ):
+ resp = client.post(
+ "/api/migrate/batch",
+ json={"source_template_ids": ["id1"], "options": {"dry_run": True}},
+ cookies={_COOKIE_NAME: _full_session()},
+ )
+
+ assert resp.status_code == 200
+ assert resp.json()["status"] == "queued"
diff --git a/tests/test_e2e.py b/tests/test_e2e.py
index 83c0f11..3b374bf 100644
--- a/tests/test_e2e.py
+++ b/tests/test_e2e.py
@@ -175,7 +175,8 @@ def test_full_migration_flow(temp_history):
):
migrate_resp2 = test_client.post(
"/api/migrate",
- json={"adobe_template_ids": [ADOBE_ID]},
+ # overwrite_if_exists=True so the second run updates the existing template
+ json={"adobe_template_ids": [ADOBE_ID], "options": {"overwrite_if_exists": True}},
cookies={_COOKIE_NAME: session_cookie},
)
diff --git a/tests/test_migration_options.py b/tests/test_migration_options.py
new file mode 100644
index 0000000..2af3d18
--- /dev/null
+++ b/tests/test_migration_options.py
@@ -0,0 +1,234 @@
+"""
+Tests for Phase 10: migration options (dryRun, overwriteIfExists, includeDocuments).
+"""
+
+import json
+import os
+from unittest.mock import patch
+
+import pytest
+import respx
+import httpx
+from fastapi.testclient import TestClient
+
+from web.app import app
+from web.session import _serializer, _COOKIE_NAME
+import web.routers.migrate as migrate_module
+
+client = TestClient(app, raise_server_exceptions=True)
+
+ADOBE_BASE = "https://api.eu2.adobesign.com/api/rest/v6"
+DS_BASE = "https://demo.docusign.net/restapi"
+DS_ACCOUNT = "test-account-id"
+TEMPLATE_NAME = "Options Test Template"
+ADOBE_ID = "opt-adobe-001"
+DS_EXISTING_ID = "ds-existing-opt-001"
+DS_NEW_ID = "ds-new-opt-001"
+
+
+def _full_session():
+ return _serializer.dumps({
+ "adobe_access_token": "adobe-tok",
+ "docusign_access_token": "ds-tok",
+ })
+
+
+@pytest.fixture(autouse=True)
+def patch_settings(monkeypatch):
+ import web.config as cfg
+ monkeypatch.setattr(cfg.settings, "docusign_account_id", DS_ACCOUNT)
+ monkeypatch.setattr(cfg.settings, "docusign_base_url", DS_BASE)
+ monkeypatch.setattr(cfg.settings, "adobe_sign_base_url", ADOBE_BASE)
+
+
+@pytest.fixture(autouse=True)
+def temp_history(tmp_path, monkeypatch):
+ history_path = str(tmp_path / ".history.json")
+ monkeypatch.setattr(migrate_module, "_HISTORY_FILE", history_path)
+ return history_path
+
+
+def _async_wrap(sync_fn):
+ async def wrapper(*args, **kwargs):
+ return sync_fn(*args, **kwargs)
+ return wrapper
+
+
+def _mock_download(template_id, access_token, output_dir):
+ os.makedirs(output_dir, exist_ok=True)
+ with open(os.path.join(output_dir, "metadata.json"), "w") as f:
+ json.dump({"name": TEMPLATE_NAME, "id": template_id}, f)
+ with open(os.path.join(output_dir, "form_fields.json"), "w") as f:
+ json.dump({"fields": []}, f)
+ with open(os.path.join(output_dir, "documents.json"), "w") as f:
+ json.dump({"documents": []}, f)
+ return True
+
+
+def _mock_compose(template_dir: str, output_path: str):
+ with open(output_path, "w") as f:
+ json.dump({"name": TEMPLATE_NAME, "description": "mocked"}, f)
+
+
+def _mock_validation_ok(download_dir):
+ return {"blockers": [], "warnings": [], "has_blockers": False}
+
+
+class TestDryRun:
+ @respx.mock
+ def test_dry_run_does_not_upload(self):
+ """dry_run=True: compose succeeds but no POST/PUT to DocuSign."""
+ with (
+ patch.object(migrate_module, "_download_adobe_template", new=_async_wrap(_mock_download)),
+ patch.object(migrate_module, "_load_compose", return_value=_mock_compose),
+ patch.object(migrate_module, "_run_validation", side_effect=_mock_validation_ok),
+ ):
+ resp = client.post(
+ "/api/migrate",
+ json={
+ "source_template_ids": [ADOBE_ID],
+ "options": {"dry_run": True},
+ },
+ cookies={_COOKIE_NAME: _full_session()},
+ )
+
+ assert resp.status_code == 200
+ results = resp.json()["results"]
+ assert results[0]["status"] == "dry_run"
+ assert results[0]["action"] == "dry_run"
+ assert results[0]["docusign_template_id"] is None
+ assert results[0]["dry_run"] is True
+
+ @respx.mock
+ def test_dry_run_false_does_upload(self):
+ """dry_run=False (default): upload proceeds."""
+ respx.get(f"{DS_BASE}/v2.1/accounts/{DS_ACCOUNT}/templates").mock(
+ return_value=httpx.Response(200, json={"envelopeTemplates": []})
+ )
+ respx.post(f"{DS_BASE}/v2.1/accounts/{DS_ACCOUNT}/templates").mock(
+ return_value=httpx.Response(201, json={"templateId": DS_NEW_ID})
+ )
+ with (
+ patch.object(migrate_module, "_download_adobe_template", new=_async_wrap(_mock_download)),
+ patch.object(migrate_module, "_load_compose", return_value=_mock_compose),
+ patch.object(migrate_module, "_run_validation", side_effect=_mock_validation_ok),
+ ):
+ resp = client.post(
+ "/api/migrate",
+ json={"source_template_ids": [ADOBE_ID], "options": {"dry_run": False}},
+ cookies={_COOKIE_NAME: _full_session()},
+ )
+
+ assert resp.status_code == 200
+ assert resp.json()["results"][0]["status"] == "success"
+
+
+class TestOverwriteIfExists:
+ @respx.mock
+ def test_skip_when_overwrite_false(self):
+ """overwrite_if_exists=False + existing template → skipped."""
+ respx.get(f"{DS_BASE}/v2.1/accounts/{DS_ACCOUNT}/templates").mock(
+ return_value=httpx.Response(200, json={
+ "envelopeTemplates": [
+ {"templateId": DS_EXISTING_ID, "name": TEMPLATE_NAME, "lastModified": "2026-04-10T00:00:00Z"}
+ ]
+ })
+ )
+ with (
+ patch.object(migrate_module, "_download_adobe_template", new=_async_wrap(_mock_download)),
+ patch.object(migrate_module, "_load_compose", return_value=_mock_compose),
+ patch.object(migrate_module, "_run_validation", side_effect=_mock_validation_ok),
+ ):
+ resp = client.post(
+ "/api/migrate",
+ json={"source_template_ids": [ADOBE_ID], "options": {"overwrite_if_exists": False}},
+ cookies={_COOKIE_NAME: _full_session()},
+ )
+
+ results = resp.json()["results"]
+ assert results[0]["status"] == "skipped"
+ assert results[0]["docusign_template_id"] == DS_EXISTING_ID
+
+ @respx.mock
+ def test_overwrite_when_true(self):
+ """overwrite_if_exists=True + existing template → PUT update."""
+ respx.get(f"{DS_BASE}/v2.1/accounts/{DS_ACCOUNT}/templates").mock(
+ return_value=httpx.Response(200, json={
+ "envelopeTemplates": [
+ {"templateId": DS_EXISTING_ID, "name": TEMPLATE_NAME, "lastModified": "2026-04-10T00:00:00Z"}
+ ]
+ })
+ )
+ respx.put(f"{DS_BASE}/v2.1/accounts/{DS_ACCOUNT}/templates/{DS_EXISTING_ID}").mock(
+ return_value=httpx.Response(200, json={})
+ )
+ with (
+ patch.object(migrate_module, "_download_adobe_template", new=_async_wrap(_mock_download)),
+ patch.object(migrate_module, "_load_compose", return_value=_mock_compose),
+ patch.object(migrate_module, "_run_validation", side_effect=_mock_validation_ok),
+ ):
+ resp = client.post(
+ "/api/migrate",
+ json={"source_template_ids": [ADOBE_ID], "options": {"overwrite_if_exists": True}},
+ cookies={_COOKIE_NAME: _full_session()},
+ )
+
+ assert resp.json()["results"][0]["action"] == "updated"
+
+
+class TestSourceTemplateIds:
+ @respx.mock
+ def test_source_template_ids_field(self):
+ """source_template_ids (new field) works correctly."""
+ respx.get(f"{DS_BASE}/v2.1/accounts/{DS_ACCOUNT}/templates").mock(
+ return_value=httpx.Response(200, json={"envelopeTemplates": []})
+ )
+ respx.post(f"{DS_BASE}/v2.1/accounts/{DS_ACCOUNT}/templates").mock(
+ return_value=httpx.Response(201, json={"templateId": DS_NEW_ID})
+ )
+ with (
+ patch.object(migrate_module, "_download_adobe_template", new=_async_wrap(_mock_download)),
+ patch.object(migrate_module, "_load_compose", return_value=_mock_compose),
+ patch.object(migrate_module, "_run_validation", side_effect=_mock_validation_ok),
+ ):
+ resp = client.post(
+ "/api/migrate",
+ json={"source_template_ids": [ADOBE_ID]},
+ cookies={_COOKIE_NAME: _full_session()},
+ )
+ assert resp.status_code == 200
+ assert resp.json()["results"][0]["status"] == "success"
+
+ def test_no_ids_returns_400(self):
+ resp = client.post(
+ "/api/migrate",
+ json={},
+ cookies={_COOKIE_NAME: _full_session()},
+ )
+ assert resp.status_code == 400
+
+
+class TestValidationBlocking:
+ def test_blocked_template_not_uploaded(self):
+ """Template with validation blockers → status=blocked, no upload."""
+ def _mock_validation_blocked(download_dir):
+ return {
+ "blockers": ["No documents attached"],
+ "warnings": [],
+ "has_blockers": True,
+ }
+
+ with (
+ patch.object(migrate_module, "_download_adobe_template", new=_async_wrap(_mock_download)),
+ patch.object(migrate_module, "_run_validation", side_effect=_mock_validation_blocked),
+ ):
+ resp = client.post(
+ "/api/migrate",
+ json={"source_template_ids": [ADOBE_ID]},
+ cookies={_COOKIE_NAME: _full_session()},
+ )
+
+ assert resp.status_code == 200
+ result = resp.json()["results"][0]
+ assert result["status"] == "blocked"
+ assert "No documents" in result["error"]
diff --git a/tests/test_normalized_schema.py b/tests/test_normalized_schema.py
new file mode 100644
index 0000000..f88dc0f
--- /dev/null
+++ b/tests/test_normalized_schema.py
@@ -0,0 +1,139 @@
+"""
+Tests for Phase 8: normalized intermediate schema and mapping service.
+"""
+
+import json
+from pathlib import Path
+
+import pytest
+
+from src.models.normalized_template import (
+ ActionType,
+ NormalizedDocument,
+ NormalizedField,
+ NormalizedRole,
+ NormalizedTemplate,
+)
+from src.services.mapping_service import adobe_folder_to_normalized
+
+
+DOWNLOADS = Path(__file__).parent.parent / "downloads"
+DAVID_DIR = DOWNLOADS / "David Tag Demo Form__CBJCHBCA"
+NDA_DIR = DOWNLOADS / "_DEMO USE ONLY_ NDA__CBJCHBCA"
+ROB_DIR = DOWNLOADS / "Rob Test__CBJCHBCA"
+
+
+# ---------------------------------------------------------------------------
+# Model construction
+# ---------------------------------------------------------------------------
+
+class TestNormalizedModels:
+ def test_normalized_role_defaults(self):
+ r = NormalizedRole(name="Customer", order=1)
+ assert r.action_type == ActionType.SIGN
+ assert r.order == 1
+
+ def test_normalized_field_defaults(self):
+ f = NormalizedField(type="text", label="Name", page=1, x=10, y=20, width=120, height=24)
+ assert f.required is False
+ assert f.read_only is False
+ assert f.options == []
+ assert f.conditional_parent_label is None
+
+ def test_normalized_template_construction(self):
+ t = NormalizedTemplate(
+ name="My Template",
+ roles=[NormalizedRole(name="Signer 1", order=1)],
+ fields=[
+ NormalizedField(type="signature", label="sig1", page=1, x=0, y=0, width=140, height=28)
+ ],
+ )
+ assert t.name == "My Template"
+ assert len(t.roles) == 1
+ assert len(t.fields) == 1
+
+ def test_role_names(self):
+ t = NormalizedTemplate(
+ name="T",
+ roles=[
+ NormalizedRole(name="Customer", order=1),
+ NormalizedRole(name="Company", order=2),
+ ],
+ )
+ assert t.role_names() == ["Customer", "Company"]
+
+ def test_fields_for_role(self):
+ t = NormalizedTemplate(
+ name="T",
+ roles=[NormalizedRole(name="Signer 1", order=1)],
+ fields=[
+ NormalizedField(type="signature", label="s1", page=1, x=0, y=0, width=140, height=28, role_name="Signer 1"),
+ NormalizedField(type="text", label="name", page=1, x=0, y=50, width=120, height=24, role_name="Signer 2"),
+ ],
+ )
+ assert len(t.fields_for_role("Signer 1")) == 1
+ assert len(t.fields_for_role("Signer 2")) == 1
+ assert len(t.fields_for_role("Nobody")) == 0
+
+ def test_normalized_document_checksum(self):
+ doc = NormalizedDocument(
+ name="test.pdf",
+ content_base64="dGVzdA==",
+ checksum_sha256="9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
+ )
+ assert doc.checksum_sha256 != ""
+
+ def test_serialization_roundtrip(self):
+ t = NormalizedTemplate(
+ name="Round Trip",
+ roles=[NormalizedRole(name="Signer 1", order=1)],
+ )
+ dumped = t.model_dump()
+ restored = NormalizedTemplate(**dumped)
+ assert restored.name == t.name
+ assert len(restored.roles) == 1
+
+
+# ---------------------------------------------------------------------------
+# Mapping service — requires real download fixtures
+# ---------------------------------------------------------------------------
+
+@pytest.mark.skipif(not DAVID_DIR.exists(), reason="Downloads fixtures not present")
+class TestMappingService:
+ def test_david_template_normalizes(self):
+ norm, warnings = adobe_folder_to_normalized(str(DAVID_DIR))
+ assert isinstance(norm, NormalizedTemplate)
+ assert norm.name != ""
+ assert len(norm.roles) >= 1
+ assert len(norm.fields) > 0
+
+ def test_david_fields_have_roles(self):
+ norm, _ = adobe_folder_to_normalized(str(DAVID_DIR))
+ role_names = norm.role_names()
+ for f in norm.fields:
+ assert f.role_name in role_names, f"Field '{f.label}' has unresolved role '{f.role_name}'"
+
+ def test_david_documents_have_checksum(self):
+ norm, _ = adobe_folder_to_normalized(str(DAVID_DIR))
+ assert len(norm.documents) >= 1
+ for doc in norm.documents:
+ assert doc.checksum_sha256 != "", f"Document '{doc.name}' missing checksum"
+ assert len(doc.checksum_sha256) == 64 # SHA-256 hex
+
+ def test_exclude_documents_option(self):
+ norm, _ = adobe_folder_to_normalized(str(DAVID_DIR), include_documents=False)
+ for doc in norm.documents:
+ assert doc.content_base64 == ""
+ # checksum still computed even when content excluded
+ assert doc.checksum_sha256 != ""
+
+ @pytest.mark.skipif(not NDA_DIR.exists(), reason="NDA fixture not present")
+ def test_nda_template_normalizes(self):
+ norm, _ = adobe_folder_to_normalized(str(NDA_DIR))
+ assert norm.name != ""
+ assert len(norm.fields) > 0
+
+ @pytest.mark.skipif(not ROB_DIR.exists(), reason="Rob fixture not present")
+ def test_rob_template_normalizes(self):
+ norm, _ = adobe_folder_to_normalized(str(ROB_DIR))
+ assert norm.name != ""
diff --git a/tests/test_regression.py b/tests/test_regression.py
index 65d923e..588d568 100644
--- a/tests/test_regression.py
+++ b/tests/test_regression.py
@@ -55,7 +55,7 @@ def test_compose_regression(template_name, update_snapshots):
output_path = tf.name
try:
- result, warnings = compose_template(template_dir, output_path)
+ result, warnings, _ = compose_template(template_dir, output_path)
if update_snapshots:
os.makedirs(FIXTURES_DIR, exist_ok=True)
@@ -121,7 +121,7 @@ def test_no_tabs_lost_on_recompose():
with tempfile.NamedTemporaryFile(suffix=".json", delete=False) as tf:
output_path = tf.name
try:
- result, _ = compose_template(template_dir, output_path)
+ result, _, _issues = compose_template(template_dir, output_path)
total_tabs = sum(_count_tabs(result).values())
assert total_tabs > 0, f"No tabs produced for {template_name}"
finally:
diff --git a/tests/test_retry.py b/tests/test_retry.py
new file mode 100644
index 0000000..fbadc0d
--- /dev/null
+++ b/tests/test_retry.py
@@ -0,0 +1,152 @@
+"""
+Tests for Phase 11: retry with backoff utility.
+"""
+
+import asyncio
+from unittest.mock import patch
+
+import pytest
+
+from src.utils.retry import (
+ RateLimitError,
+ async_retry_with_backoff,
+ check_response_retryable,
+ retry_with_backoff,
+)
+
+
+class TestRetryWithBackoff:
+ def test_success_on_first_try(self):
+ call_count = {"n": 0}
+
+ @retry_with_backoff(max_retries=3, base_delay=0.01)
+ def fn():
+ call_count["n"] += 1
+ return "ok"
+
+ result = fn()
+ assert result == "ok"
+ assert call_count["n"] == 1
+
+ def test_retries_on_exception(self):
+ call_count = {"n": 0}
+
+ @retry_with_backoff(max_retries=2, base_delay=0.01)
+ def fn():
+ call_count["n"] += 1
+ if call_count["n"] < 3:
+ raise ConnectionError("transient")
+ return "ok"
+
+ with patch("src.utils.retry.time.sleep"):
+ result = fn()
+
+ assert result == "ok"
+ assert call_count["n"] == 3
+
+ def test_raises_after_max_retries(self):
+ @retry_with_backoff(max_retries=2, base_delay=0.01)
+ def fn():
+ raise ConnectionError("always fails")
+
+ with patch("src.utils.retry.time.sleep"):
+ with pytest.raises(ConnectionError):
+ fn()
+
+ def test_exponential_delay(self):
+ sleeps = []
+
+ @retry_with_backoff(max_retries=3, base_delay=1.0)
+ def fn():
+ raise ValueError("fail")
+
+ with patch("src.utils.retry.time.sleep", side_effect=lambda d: sleeps.append(d)):
+ with pytest.raises(ValueError):
+ fn()
+
+ assert len(sleeps) == 3
+ assert sleeps[0] == 1.0
+ assert sleeps[1] == 2.0
+ assert sleeps[2] == 4.0
+
+ def test_max_delay_capped(self):
+ sleeps = []
+
+ @retry_with_backoff(max_retries=5, base_delay=10.0, max_delay=15.0)
+ def fn():
+ raise ValueError("fail")
+
+ with patch("src.utils.retry.time.sleep", side_effect=lambda d: sleeps.append(d)):
+ with pytest.raises(ValueError):
+ fn()
+
+ assert all(d <= 15.0 for d in sleeps)
+
+ def test_only_retries_specified_exceptions(self):
+ call_count = {"n": 0}
+
+ @retry_with_backoff(max_retries=3, base_delay=0.01, retryable_exceptions=(ConnectionError,))
+ def fn():
+ call_count["n"] += 1
+ raise ValueError("not retryable")
+
+ with pytest.raises(ValueError):
+ fn()
+
+ assert call_count["n"] == 1 # no retries for ValueError
+
+
+class TestAsyncRetryWithBackoff:
+ def test_async_success_on_first_try(self):
+ call_count = {"n": 0}
+
+ @async_retry_with_backoff(max_retries=3, base_delay=0.01)
+ async def fn():
+ call_count["n"] += 1
+ return "ok"
+
+ result = asyncio.run(fn())
+ assert result == "ok"
+ assert call_count["n"] == 1
+
+ def test_async_retries_on_exception(self):
+ call_count = {"n": 0}
+
+ @async_retry_with_backoff(max_retries=2, base_delay=0.01)
+ async def fn():
+ call_count["n"] += 1
+ if call_count["n"] < 3:
+ raise ConnectionError("transient")
+ return "ok"
+
+ with patch("src.utils.retry.asyncio.sleep", new=asyncio.coroutine(lambda d: None)):
+ result = asyncio.get_event_loop().run_until_complete(fn())
+
+ assert result == "ok"
+
+ def test_async_raises_after_max_retries(self):
+ @async_retry_with_backoff(max_retries=1, base_delay=0.01)
+ async def fn():
+ raise ConnectionError("always fails")
+
+ with patch("src.utils.retry.asyncio.sleep", new=asyncio.coroutine(lambda d: None)):
+ with pytest.raises(ConnectionError):
+ asyncio.get_event_loop().run_until_complete(fn())
+
+
+class TestCheckResponseRetryable:
+ def test_429_is_retryable(self):
+ assert check_response_retryable(429) is True
+
+ def test_503_is_retryable(self):
+ assert check_response_retryable(503) is True
+
+ def test_200_not_retryable(self):
+ assert check_response_retryable(200) is False
+
+ def test_400_not_retryable(self):
+ assert check_response_retryable(400) is False
+
+ def test_404_not_retryable(self):
+ assert check_response_retryable(404) is False
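For reference, a minimal decorator satisfying the behavior these tests pin down (exponential doubling from `base_delay`, a `max_delay` cap, re-raise after `max_retries`, retry only on `retryable_exceptions`) might look like this sketch. The real `src/utils/retry.py` may differ; the signature and defaults here are assumptions read off the tests:

```python
import time
from functools import wraps

def retry_with_backoff(max_retries=3, base_delay=1.0, max_delay=60.0,
                       retryable_exceptions=(Exception,)):
    """Retry the wrapped callable with capped exponential backoff."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(max_retries + 1):
                try:
                    return fn(*args, **kwargs)
                except retryable_exceptions:
                    if attempt == max_retries:
                        raise  # retries exhausted: propagate the last error
                    # delays: base, 2*base, 4*base, ... capped at max_delay
                    time.sleep(min(base_delay * (2 ** attempt), max_delay))
        return wrapper
    return decorator
```

With `max_retries=3, base_delay=1.0` this sleeps 1.0, 2.0, 4.0 before the final attempt, matching `test_exponential_delay`.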
diff --git a/tests/test_security.py b/tests/test_security.py
new file mode 100644
index 0000000..101df23
--- /dev/null
+++ b/tests/test_security.py
@@ -0,0 +1,138 @@
+"""
+Tests for Phase 12: security — log sanitization and audit trail.
+"""
+
+import hashlib
+import json
+import logging
+
+import pytest
+
+from src.utils.log_sanitizer import (
+ SanitizingFilter,
+ install_sanitizing_filter,
+ redact,
+ redact_dict,
+)
+
+
+class TestRedact:
+ def test_bearer_token_redacted(self):
+ text = "Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.abc.def"
+ result = redact(text)
+ assert "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9" not in result
+ assert "[REDACTED]" in result
+
+ def test_access_token_assignment_redacted(self):
+ text = 'access_token: "super_secret_value_12345"'
+ result = redact(text)
+ assert "super_secret_value_12345" not in result
+ assert "[REDACTED]" in result
+
+ def test_password_redacted(self):
+ text = "password=hunter2supersecure"
+ result = redact(text)
+ assert "hunter2supersecure" not in result
+
+ def test_safe_text_unchanged(self):
+ text = "Template migrated successfully: NDA v2"
+ result = redact(text)
+ assert result == text
+
+ def test_long_base64_redacted(self):
+ # Simulate a long PDF base64 payload being logged
+ b64 = "A" * 600
+ result = redact(b64)
+ assert "A" * 100 not in result
+ assert "[REDACTED]" in result
+
+ def test_short_base64_not_redacted(self):
+ # Short base64 (e.g. an ID) should not be redacted
+ short_b64 = "dGVzdA==" # "test" base64
+ result = redact(short_b64)
+ assert "dGVzdA" in result
+
+
+class TestRedactDict:
+ def test_token_key_redacted(self):
+ d = {"access_token": "secret123", "name": "My Template"}
+ result = redact_dict(d)
+ assert result["access_token"] == "[REDACTED]"
+ assert result["name"] == "My Template"
+
+ def test_nested_dict_redacted(self):
+ d = {"auth": {"token": "secret123", "user": "alice"}}
+ result = redact_dict(d)
+ assert result["auth"]["token"] == "[REDACTED]"
+ assert result["auth"]["user"] == "alice"
+
+ def test_document_base64_redacted(self):
+ d = {"documentBase64": "A" * 200}
+ result = redact_dict(d)
+ assert result["documentBase64"] == "[REDACTED]"
+
+ def test_list_of_dicts_redacted(self):
+ d = {"items": [{"token": "abc123xyz", "id": "1"}]}
+ result = redact_dict(d)
+ assert result["items"][0]["token"] == "[REDACTED]"
+ assert result["items"][0]["id"] == "1"
+
+ def test_safe_dict_unchanged(self):
+ d = {"template_name": "NDA", "status": "success", "count": 3}
+ result = redact_dict(d)
+ assert result == d
+
+
+class TestSanitizingFilter:
+ def test_filter_redacts_log_message(self):
+ record = logging.LogRecord(
+ name="test", level=logging.INFO,
+ pathname="", lineno=0,
+ msg="Bearer eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.payload.signature",
+ args=(), exc_info=None,
+ )
+ f = SanitizingFilter()
+ f.filter(record)
+ assert "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9" not in record.msg
+
+ def test_filter_redacts_args(self):
+ record = logging.LogRecord(
+ name="test", level=logging.INFO,
+ pathname="", lineno=0,
+ msg="Token: %s",
+ args=("access_token=supersecretvalue123456",),
+ exc_info=None,
+ )
+ f = SanitizingFilter()
+ f.filter(record)
+ assert "supersecretvalue123456" not in str(record.args)
+
+ def test_install_sanitizing_filter_idempotent(self):
+ install_sanitizing_filter()
+ install_sanitizing_filter() # second call should not add duplicate
+ root = logging.getLogger()
+ sanitizing_filters = [f for f in root.filters if isinstance(f, SanitizingFilter)]
+ assert len(sanitizing_filters) == 1
+ # Clean up
+ for f in sanitizing_filters:
+ root.removeFilter(f)
+
+
+class TestPdfChecksum:
+ def test_checksum_matches_content(self):
+ from src.services.mapping_service import adobe_folder_to_normalized
+ from pathlib import Path
+
+ downloads = Path(__file__).parent.parent / "downloads" / "David Tag Demo Form__CBJCHBCA"
+ if not downloads.exists():
+ pytest.skip("Downloads fixtures not present")
+
+ norm, _ = adobe_folder_to_normalized(str(downloads))
+ assert norm.documents, "Expected at least one document"
+
+ doc = norm.documents[0]
+ # Recompute the checksum from the source PDF to verify it matches
+ pdf_bytes = Path(doc.source_path).read_bytes()
+ expected_checksum = hashlib.sha256(pdf_bytes).hexdigest()
+ assert doc.checksum_sha256 == expected_checksum
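The sanitizer behavior these tests describe (bearer tokens, `key=value` secrets, and long base64 blobs replaced with a marker; sensitive dict keys redacted recursively) could be sketched as below. This is not the actual `src/utils/log_sanitizer.py`; the patterns, the 500-character base64 threshold, and the key list are assumptions inferred from the assertions:

```python
import re

REDACTED = "[REDACTED]"
# Keys treated as sensitive regardless of value (compared case-insensitively)
_SENSITIVE_KEYS = {"token", "access_token", "refresh_token",
                   "password", "documentbase64", "authorization"}
_PATTERNS = [
    re.compile(r"Bearer\s+[A-Za-z0-9._\-]+"),  # bearer / JWT tokens
    re.compile(r'(access_token|refresh_token|password)["\s:=]+[\'"]?[^\s\'",}]+'),
    re.compile(r"[A-Za-z0-9+/=]{500,}"),       # long base64 blobs (short IDs pass)
]

def redact(text: str) -> str:
    """Replace anything matching a sensitive pattern with the marker."""
    for pat in _PATTERNS:
        text = pat.sub(REDACTED, text)
    return text

def redact_dict(d):
    """Recursively redact sensitive keys in dicts and lists of dicts."""
    if isinstance(d, dict):
        return {k: REDACTED if k.lower() in _SENSITIVE_KEYS else redact_dict(v)
                for k, v in d.items()}
    if isinstance(d, list):
        return [redact_dict(v) for v in d]
    return d  # a real implementation might also run redact() on string values
```

A `logging.Filter` subclass can then apply `redact` to `record.msg` and `record.args`, which is what `SanitizingFilter` appears to do.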
diff --git a/tests/test_validation_service.py b/tests/test_validation_service.py
new file mode 100644
index 0000000..4337cbf
--- /dev/null
+++ b/tests/test_validation_service.py
@@ -0,0 +1,181 @@
+"""
+Tests for Phase 9: validation service.
+"""
+
+import pytest
+
+from src.models.normalized_template import (
+ NormalizedDocument,
+ NormalizedField,
+ NormalizedRole,
+ NormalizedTemplate,
+)
+from src.services.validation_service import (
+ ValidationResult,
+ compare_field_counts,
+ validate_template,
+)
+from src.reports.report_builder import (
+ MigrationReport,
+ MigrationStatus,
+ build_blocked_report,
+ build_error_report,
+ build_skipped_report,
+ build_success_report,
+)
+
+
+def _make_template(**kwargs) -> NormalizedTemplate:
+ defaults = dict(
+ name="Test Template",
+ roles=[NormalizedRole(name="Signer 1", order=1)],
+ fields=[
+ NormalizedField(
+ type="signature", label="sig1", page=1,
+ x=100, y=500, width=140, height=28,
+ role_name="Signer 1",
+ )
+ ],
+ documents=[NormalizedDocument(name="test.pdf", checksum_sha256="abc", source_path="/fake.pdf")],
+ )
+ defaults.update(kwargs)
+ return NormalizedTemplate(**defaults)
+
+
+class TestValidationService:
+ def test_valid_template_passes(self):
+ t = _make_template()
+ result = validate_template(t)
+ assert result.is_ok()
+ assert result.blockers == []
+
+ def test_no_recipients_is_blocker(self):
+ t = _make_template(roles=[])
+ result = validate_template(t)
+ assert result.has_blockers()
+ assert any("recipient" in b.lower() or "role" in b.lower() for b in result.blockers)
+
+ def test_no_documents_is_blocker(self):
+ t = _make_template(documents=[])
+ result = validate_template(t)
+ assert result.has_blockers()
+ assert any("document" in b.lower() for b in result.blockers)
+
+ def test_no_fields_is_warning(self):
+ t = _make_template(fields=[])
+ result = validate_template(t)
+ assert result.is_ok() # not a blocker
+ assert any("0 field" in w or "empty" in w.lower() for w in result.warnings)
+
+ def test_no_signature_field_is_warning(self):
+ t = _make_template(fields=[
+ NormalizedField(type="text", label="name", page=1, x=0, y=0, width=120, height=24, role_name="Signer 1")
+ ])
+ result = validate_template(t)
+ assert result.is_ok()
+ assert any("signature" in w.lower() for w in result.warnings)
+
+ def test_field_with_unknown_role_is_warning(self):
+ t = _make_template(fields=[
+ NormalizedField(
+ type="signature", label="sig1", page=1, x=0, y=0,
+ width=140, height=28, role_name="NonExistentRole"
+ )
+ ])
+ result = validate_template(t)
+ assert result.is_ok()
+ assert any("role" in w.lower() or "assign" in w.lower() for w in result.warnings)
+
+ def test_unsupported_features_become_warnings(self):
+ t = _make_template(unsupported_features=["Conditional HIDE action", "Webhook associations"])
+ result = validate_template(t)
+ assert result.is_ok()
+ assert len([w for w in result.warnings if "Unsupported" in w or "manual" in w.lower()]) >= 2
+
+ def test_validation_result_all_issues(self):
+ r = ValidationResult(blockers=["blocker1"], warnings=["warn1"])
+ issues = r.all_issues()
+ assert any("BLOCKER" in i for i in issues)
+ assert any("WARNING" in i for i in issues)
+
+
+class TestCompareFieldCounts:
+ def test_matching_counts_no_warnings(self):
+ t = _make_template(fields=[
+ NormalizedField(type="signature", label="sig1", page=1, x=0, y=0, width=140, height=28, role_name="Signer 1")
+ ])
+ ds = {
+ "recipients": {
+ "signers": [{"tabs": {"signHereTabs": [{"tabLabel": "sig1"}]}}]
+ }
+ }
+ result = compare_field_counts(t, ds)
+ assert result.is_ok()
+
+ def test_mismatched_counts_warns(self):
+ t = _make_template(fields=[
+ NormalizedField(type="signature", label="s1", page=1, x=0, y=0, width=140, height=28, role_name="Signer 1"),
+ NormalizedField(type="text", label="t1", page=1, x=0, y=50, width=120, height=24, role_name="Signer 1"),
+ ])
+ ds = {"recipients": {"signers": [{"tabs": {"signHereTabs": [{}]}}]}}
+ result = compare_field_counts(t, ds)
+ assert any("mismatch" in w.lower() or "count" in w.lower() for w in result.warnings)
+
+ def test_zero_tabs_with_fields_warns(self):
+ t = _make_template()
+ ds = {"recipients": {"signers": []}}
+ result = compare_field_counts(t, ds)
+ assert result.warnings # should warn about 0 tabs
+
+
+class TestReportBuilder:
+ def test_success_report(self):
+ r = build_success_report("My Template", "src_001", "ds_001", warnings=[])
+ assert r.status == MigrationStatus.SUCCESS
+ assert r.docusign_template_id == "ds_001"
+
+ def test_success_with_warnings(self):
+ r = build_success_report("My Template", "src_001", "ds_001", warnings=["some warning"])
+ assert r.status == MigrationStatus.SUCCESS_WITH_WARNINGS
+
+ def test_blocked_report(self):
+ r = build_blocked_report("T", "id1", blockers=["no docs"], warnings=[])
+ assert r.status == MigrationStatus.BLOCKED
+ assert r.blockers == ["no docs"]
+
+ def test_error_report(self):
+ r = build_error_report("T", "id1", error="Connection refused")
+ assert r.status == MigrationStatus.ERROR
+ assert "Connection" in r.error
+
+ def test_skipped_report(self):
+ r = build_skipped_report("T", "id1", reason="already migrated")
+ assert r.status == MigrationStatus.SKIPPED
+
+ def test_migration_report_summary(self):
+ report = MigrationReport()
+ report.add(build_success_report("T1", "1", "ds1", []))
+ report.add(build_success_report("T2", "2", "ds2", ["warn"]))
+ report.add(build_error_report("T3", "3", "fail"))
+ summary = report.summary()
+ assert summary["total"] == 3
+ assert summary.get("success", 0) == 1
+ assert summary.get("error", 0) == 1
+
+ def test_report_to_dict(self):
+ report = MigrationReport()
+ report.add(build_success_report("T1", "1", "ds1", []))
+ d = report.to_dict()
+ assert "summary" in d
+ assert "templates" in d
+ assert d["templates"][0]["template_name"] == "T1"
+
+ def test_report_has_errors(self):
+ report = MigrationReport()
+ report.add(build_error_report("T", "1", "err"))
+ assert report.has_errors()
+
+ def test_report_no_errors(self):
+ report = MigrationReport()
+ report.add(build_success_report("T", "1", "ds1", []))
+ assert not report.has_errors()
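The tests above pin down the `ValidationResult` interface without showing it. A minimal sketch of the shape those assertions imply — this is an assumption, not the actual `src/services/validation_service.py` implementation:

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class ValidationResult:
    """Shape implied by TestValidationService (assumed, not the real class)."""
    blockers: List[str] = field(default_factory=list)
    warnings: List[str] = field(default_factory=list)

    def has_blockers(self) -> bool:
        return bool(self.blockers)

    def is_ok(self) -> bool:
        # Warnings never fail validation; only blockers do
        # (see test_no_fields_is_warning vs test_no_documents_is_blocker).
        return not self.blockers

    def all_issues(self) -> List[str]:
        return [f"BLOCKER: {b}" for b in self.blockers] + [
            f"WARNING: {w}" for w in self.warnings
        ]
```

`test_validation_result_all_issues` passes against this sketch: a result with one blocker and one warning yields one `BLOCKER:`-prefixed and one `WARNING:`-prefixed issue.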
diff --git a/web/app.py b/web/app.py
index 7fd6ca6..e561ac1 100644
--- a/web/app.py
+++ b/web/app.py
@@ -15,7 +15,7 @@ from fastapi.responses import FileResponse
import os
from web.config import settings
-from web.routers import auth, templates, migrate
+from web.routers import auth, templates, migrate, verify
app = FastAPI(
title="Adobe Sign → DocuSign Migrator",
@@ -24,9 +24,10 @@ app = FastAPI(
)
# Routers
-app.include_router(auth.router, prefix="/api/auth", tags=["auth"])
+app.include_router(auth.router, prefix="/api/auth", tags=["auth"])
app.include_router(templates.router, prefix="/api/templates", tags=["templates"])
-app.include_router(migrate.router, prefix="/api/migrate", tags=["migrate"])
+app.include_router(migrate.router, prefix="/api/migrate", tags=["migrate"])
+app.include_router(verify.router, prefix="/api/verify", tags=["verify"])
# Static files (frontend)
_static_dir = os.path.join(os.path.dirname(__file__), "static")
diff --git a/web/routers/migrate.py b/web/routers/migrate.py
index 5c93955..8437a7d 100644
--- a/web/routers/migrate.py
+++ b/web/routers/migrate.py
@@ -3,8 +3,10 @@ web/routers/migrate.py
----------------------
Migration trigger and history endpoints.
-POST /api/migrate — run the pipeline for one or more Adobe template IDs
-GET /api/migrate/history — return past migration records
+POST /api/migrate — run the pipeline for one or more Adobe template IDs
+POST /api/migrate/batch — batch migration with async progress tracking
+GET /api/migrate/batch/{id} — poll batch job status
+GET /api/migrate/history — return past migration records
"""
import asyncio
@@ -12,8 +14,9 @@ import json
import os
import sys
import tempfile
+import uuid
from datetime import datetime, timezone
-from typing import List, Optional
+from typing import Dict, List, Optional
import httpx
from fastapi import APIRouter, Request
@@ -23,7 +26,6 @@ from pydantic import BaseModel
from web.config import settings
from web.session import get_session
-# Ensure src/ is on path
sys.path.insert(0, os.path.join(os.path.dirname(__file__), "..", "..", "src"))
router = APIRouter()
@@ -32,9 +34,26 @@ _HISTORY_FILE = os.path.join(
os.path.dirname(__file__), "..", "..", "migration-output", ".history.json"
)
+# In-memory batch job store (keyed by job_id)
+_batch_jobs: Dict[str, dict] = {}
+
+
+class MigrationOptions(BaseModel):
+ dry_run: bool = False
+ overwrite_if_exists: bool = False
+ include_documents: bool = True
+
class MigrateRequest(BaseModel):
- adobe_template_ids: List[str]
+ # Primary API (blueprint-aligned)
+ source_template_ids: Optional[List[str]] = None
+ target_folder: Optional[str] = None
+ options: MigrationOptions = MigrationOptions()
+ # Legacy field kept for backward compatibility
+ adobe_template_ids: Optional[List[str]] = None
+
+ def resolved_ids(self) -> List[str]:
+ return self.source_template_ids or self.adobe_template_ids or []
def _load_history() -> list:
@@ -51,10 +70,7 @@ def _save_history(records: list) -> None:
def _load_compose():
- """
- Dynamically load and return the compose_template function from src/.
- Isolated in its own function so tests can patch it without touching the file system.
- """
+ """Dynamically load compose_template from src/."""
import importlib.util
spec = importlib.util.spec_from_file_location(
"compose_docusign_template",
@@ -71,21 +87,17 @@ async def _download_adobe_template(template_id: str, access_token: str, output_d
base = settings.adobe_sign_base_url
async with httpx.AsyncClient() as client:
- # Metadata
meta_resp = await client.get(f"{base}/libraryDocuments/{template_id}", headers=headers)
if not meta_resp.is_success:
return False
metadata = meta_resp.json()
- # Form fields
fields_resp = await client.get(f"{base}/libraryDocuments/{template_id}/formFields", headers=headers)
form_fields = fields_resp.json() if fields_resp.is_success else {"fields": []}
- # Documents list
docs_resp = await client.get(f"{base}/libraryDocuments/{template_id}/documents", headers=headers)
documents = docs_resp.json() if docs_resp.is_success else {"documents": []}
- # Download first PDF
doc_list = documents.get("documents", [])
pdf_bytes = b""
if doc_list:
@@ -111,10 +123,27 @@ async def _download_adobe_template(template_id: str, access_token: str, output_d
return True
+def _run_validation(download_dir: str) -> dict:
+ """Run validation service on downloaded template, return summary."""
+ try:
+ from src.services.mapping_service import adobe_folder_to_normalized
+ from src.services.validation_service import validate_template
+ norm, _ = adobe_folder_to_normalized(download_dir)
+ result = validate_template(norm)
+ return {
+ "blockers": result.blockers,
+ "warnings": result.warnings,
+ "has_blockers": result.has_blockers(),
+ }
+ except Exception as exc:
+ return {"blockers": [], "warnings": [f"Validation skipped: {exc}"], "has_blockers": False}
+
+
async def _migrate_one(
adobe_id: str,
adobe_access_token: str,
docusign_access_token: str,
+ options: MigrationOptions,
) -> dict:
"""Run the full pipeline for one Adobe template. Returns a result record."""
timestamp = datetime.now(timezone.utc).isoformat()
@@ -134,18 +163,42 @@ async def _migrate_one(
"action": None,
"status": "failed",
"error": "Adobe Sign download failed",
+ "warnings": [],
+ "blockers": [],
+ "field_issues": [],
+ "dry_run": options.dry_run,
}
- # Read template name from metadata
with open(os.path.join(download_dir, "metadata.json")) as f:
metadata = json.load(f)
template_name = metadata.get("name", adobe_id)
- # 2. Compose DocuSign template JSON
+ # 2. Validate
+ validation = _run_validation(download_dir)
+ if validation["has_blockers"]:
+ return {
+ "timestamp": timestamp,
+ "adobe_template_id": adobe_id,
+ "adobe_template_name": template_name,
+ "docusign_template_id": None,
+ "action": "blocked",
+ "status": "blocked",
+ "error": f"Validation blockers: {'; '.join(validation['blockers'])}",
+ "warnings": validation["warnings"],
+ "blockers": validation["blockers"],
+ "field_issues": [],
+ "dry_run": options.dry_run,
+ }
+
+ # 3. Compose
composed_file = os.path.join(tmpdir, "docusign-template.json")
+ compose_issues: list = []
try:
compose_fn = _load_compose()
- compose_fn(download_dir, composed_file)
+ compose_result = compose_fn(download_dir, composed_file)
+ # compose_template returns (template, warnings, issues)
+ if isinstance(compose_result, tuple) and len(compose_result) >= 3:
+ compose_issues = compose_result[2] or []
except Exception as exc:
return {
"timestamp": timestamp,
@@ -155,6 +208,10 @@ async def _migrate_one(
"action": None,
"status": "failed",
"error": f"Compose failed: {exc}",
+ "warnings": validation["warnings"],
+ "blockers": [],
+ "field_issues": [],
+ "dry_run": options.dry_run,
}
if not os.path.exists(composed_file):
return {
@@ -165,12 +222,36 @@ async def _migrate_one(
"action": None,
"status": "failed",
"error": "Compose produced no output file",
+ "warnings": validation["warnings"],
+ "blockers": [],
+ "field_issues": [],
+ "dry_run": options.dry_run,
}
- # 3. Upload (upsert) to DocuSign using web session token
+ # 4. Dry run — stop here, do not upload
+ if options.dry_run:
+ return {
+ "timestamp": timestamp,
+ "adobe_template_id": adobe_id,
+ "adobe_template_name": template_name,
+ "docusign_template_id": None,
+ "action": "dry_run",
+ "status": "dry_run",
+ "error": None,
+ "warnings": validation["warnings"],
+ "blockers": [],
+ "field_issues": compose_issues,
+ "dry_run": True,
+ }
+
+ # 5. Upload (upsert) to DocuSign
with open(composed_file) as f:
template_json = json.load(f)
+ if not options.include_documents:
+ for doc in template_json.get("documents", []):
+ doc.pop("documentBase64", None)
+
ds_headers = {
"Authorization": f"Bearer {docusign_access_token}",
"Content-Type": "application/json",
@@ -179,7 +260,7 @@ async def _migrate_one(
list_url = f"{settings.docusign_base_url}/v2.1/accounts/{settings.docusign_account_id}/templates"
async with httpx.AsyncClient() as client:
- # Find existing
+ # Duplicate detection
list_resp = await client.get(
list_url, headers=ds_headers, params={"search_text": template_name, "count": 100}
)
@@ -191,6 +272,22 @@ async def _migrate_one(
exact.sort(key=lambda t: t.get("lastModified", ""), reverse=True)
existing_id = exact[0]["templateId"]
+ # Skip if already exists and overwrite is disabled
+ if existing_id and not options.overwrite_if_exists:
+ return {
+ "timestamp": timestamp,
+ "adobe_template_id": adobe_id,
+ "adobe_template_name": template_name,
+ "docusign_template_id": existing_id,
+ "action": "skipped",
+ "status": "skipped",
+ "error": None,
+ "warnings": validation["warnings"] + ["Skipped: template already exists (overwrite_if_exists=false)"],
+ "blockers": [],
+ "field_issues": compose_issues,
+ "dry_run": False,
+ }
+
if existing_id:
up_resp = await client.put(
f"{list_url}/{existing_id}", headers=ds_headers, json=template_json
@@ -211,6 +308,10 @@ async def _migrate_one(
"action": None,
"status": "failed",
"error": f"DocuSign upload failed ({up_resp.status_code}): {up_resp.text[:200]}",
+ "warnings": validation["warnings"],
+ "blockers": [],
+ "field_issues": compose_issues,
+ "dry_run": False,
}
return {
@@ -221,6 +322,10 @@ async def _migrate_one(
"action": action,
"status": "success",
"error": None,
+ "warnings": validation["warnings"],
+ "blockers": [],
+ "field_issues": compose_issues,
+ "dry_run": False,
}
@@ -233,17 +338,21 @@ async def run_migration(body: MigrateRequest, request: Request):
if not session.get("docusign_access_token"):
return JSONResponse({"error": "not authenticated to DocuSign"}, status_code=401)
+ ids = body.resolved_ids()
+ if not ids:
+ return JSONResponse({"error": "no template IDs provided"}, status_code=400)
+
tasks = [
_migrate_one(
aid,
session["adobe_access_token"],
session["docusign_access_token"],
+ body.options,
)
- for aid in body.adobe_template_ids
+ for aid in ids
]
results = await asyncio.gather(*tasks)
- # Append to history
history = _load_history()
history.extend(results)
_save_history(history)
@@ -255,3 +364,101 @@ async def run_migration(body: MigrateRequest, request: Request):
def migration_history():
"""Return all past migration records."""
return {"history": _load_history()}
+
+
+# ---------------------------------------------------------------------------
+# Batch migration
+# ---------------------------------------------------------------------------
+
+async def _run_batch_job(
+ job_id: str,
+ ids: List[str],
+ adobe_token: str,
+ ds_token: str,
+ options: MigrationOptions,
+) -> None:
+ """Background coroutine that processes a batch job and updates _batch_jobs."""
+ job = _batch_jobs[job_id]
+ job["status"] = "running"
+ results = []
+
+ for i, adobe_id in enumerate(ids):
+ job["progress"] = {"completed": i, "total": len(ids), "current_id": adobe_id}
+ result = await _migrate_one(adobe_id, adobe_token, ds_token, options)
+
+ # Retry once on transient failures (network errors, not validation blockers)
+ if result["status"] == "failed" and "upload failed" in (result.get("error") or ""):
+ result = await _migrate_one(adobe_id, adobe_token, ds_token, options)
+ if result["status"] != "failed":
+ result["retried"] = True
+
+ results.append(result)
+ job["results"] = results
+
+ # Persist to history
+ history = _load_history()
+ history.extend(results)
+ _save_history(history)
+
+ success = sum(1 for r in results if r["status"] == "success")
+ failed = sum(1 for r in results if r["status"] in ("failed", "blocked"))
+ skipped = sum(1 for r in results if r["status"] == "skipped")
+ dry_runs = sum(1 for r in results if r["status"] == "dry_run")
+
+ job["status"] = "completed"
+ job["progress"] = {"completed": len(ids), "total": len(ids), "current_id": None}
+ job["summary"] = {
+ "total": len(ids),
+ "success": success,
+ "failed": failed,
+ "skipped": skipped,
+ "dry_run": dry_runs,
+ }
+
+
+@router.post("/batch")
+async def run_batch_migration(body: MigrateRequest, request: Request):
+ """
+ Start an async batch migration job. Returns a job_id immediately.
+ Poll GET /api/migrate/batch/{job_id} for status.
+ """
+ session = get_session(request)
+ if not session.get("adobe_access_token"):
+ return JSONResponse({"error": "not authenticated to Adobe Sign"}, status_code=401)
+ if not session.get("docusign_access_token"):
+ return JSONResponse({"error": "not authenticated to DocuSign"}, status_code=401)
+
+ ids = body.resolved_ids()
+ if not ids:
+ return JSONResponse({"error": "no template IDs provided"}, status_code=400)
+
+ job_id = str(uuid.uuid4())
+ _batch_jobs[job_id] = {
+ "job_id": job_id,
+ "status": "queued",
+ "total": len(ids),
+ "results": [],
+ "progress": {"completed": 0, "total": len(ids), "current_id": None},
+ "summary": None,
+ "created_at": datetime.now(timezone.utc).isoformat(),
+ }
+
+ asyncio.create_task(
+ _run_batch_job(
+ job_id, ids,
+ session["adobe_access_token"],
+ session["docusign_access_token"],
+ body.options,
+ )
+ )
+
+ return {"job_id": job_id, "total": len(ids), "status": "queued"}
+
+
+@router.get("/batch/{job_id}")
+def get_batch_status(job_id: str):
+ """Poll the status of a batch migration job."""
+ job = _batch_jobs.get(job_id)
+ if not job:
+ return JSONResponse({"error": "batch job not found"}, status_code=404)
+ return job
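The tallying at the end of `_run_batch_job` is worth noting: `blocked` results are folded into the `failed` count, while `skipped` and `dry_run` are reported separately. Isolated as a pure function (a sketch mirroring the counting logic only; field names follow the result records built in `_migrate_one`):

```python
from typing import Dict, List


def summarize_batch(results: List[dict]) -> Dict[str, int]:
    """Tally batch results the way _run_batch_job does.

    'blocked' (validation blockers) counts toward 'failed'; skips and
    dry runs are surfaced on their own so the UI can distinguish
    "nothing happened by design" from "something went wrong".
    """
    return {
        "total": len(results),
        "success": sum(1 for r in results if r["status"] == "success"),
        "failed": sum(1 for r in results if r["status"] in ("failed", "blocked")),
        "skipped": sum(1 for r in results if r["status"] == "skipped"),
        "dry_run": sum(1 for r in results if r["status"] == "dry_run"),
    }
```

A batch of one success, one blocked, and one skipped template therefore reports `failed: 1`, which is what the summary assertions in `test_migration_report_summary` rely on for the error path.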
diff --git a/web/routers/templates.py b/web/routers/templates.py
index 141cae7..378bde8 100644
--- a/web/routers/templates.py
+++ b/web/routers/templates.py
@@ -6,6 +6,7 @@ Computes per-template migration status for the side-by-side UI.
"""
from datetime import datetime, timezone
+from pathlib import Path
from typing import Optional
import httpx
@@ -151,6 +152,8 @@ async def template_status(request: Request):
# needs_update if Adobe was modified after the DS template
status = "needs_update" if adobe_modified > ds_modified else "migrated"
+ blockers, warnings = _get_validation(t.get("id", ""), name)
+
results.append({
"adobe_id": t.get("id"),
"name": name,
@@ -158,10 +161,36 @@ async def template_status(request: Request):
"docusign_id": ds_match.get("templateId") if ds_match else None,
"docusign_modified": ds_match.get("lastModified") if ds_match else None,
"status": status,
+ "blockers": blockers,
+ "warnings": warnings,
})
return {"templates": results}
+def _get_validation(template_id: str, template_name: str) -> tuple[list, list]:
+ """Return (blockers, warnings) if the template has been downloaded; else ([], [])."""
+ try:
+ from src.services.mapping_service import adobe_folder_to_normalized
+ from src.services.validation_service import validate_template
+
+ downloads_dir = Path(settings.downloads_dir) if hasattr(settings, "downloads_dir") else Path("downloads")
+ # Match folder by name__id or name pattern
+ candidates = list(downloads_dir.glob(f"*__{template_id}"))
+ if not candidates:
+ # Try matching by sanitised name prefix
+ safe = template_name.replace("/", "_").replace("\\", "_")
+ candidates = list(downloads_dir.glob(f"{safe}*"))
+
+ if not candidates or not candidates[0].is_dir():
+ return [], []
+
+ normalized, _ = adobe_folder_to_normalized(str(candidates[0]))
+ result = validate_template(normalized)
+ return result.blockers, result.warnings
+ except Exception:
+ return [], []
+
+
# asyncio needed for gather — import at top of module
import asyncio
diff --git a/web/routers/verify.py b/web/routers/verify.py
new file mode 100644
index 0000000..709a4f5
--- /dev/null
+++ b/web/routers/verify.py
@@ -0,0 +1,146 @@
+"""
+web/routers/verify.py
+---------------------
+Verification endpoints: send test envelopes, poll status, void.
+Uses DocuSign Envelopes API to confirm migrated templates work end-to-end.
+"""
+
+from typing import Optional
+
+import httpx
+from fastapi import APIRouter, Request
+from fastapi.responses import JSONResponse
+from pydantic import BaseModel
+
+from web.config import settings
+from web.session import get_session
+
+router = APIRouter()
+
+
+class SendRequest(BaseModel):
+ template_id: str
+ recipient_name: str
+ recipient_email: str
+
+
+class VoidRequest(BaseModel):
+ reason: str = "Test envelope — voided after verification"
+
+
+def _require_docusign(session: dict) -> Optional[JSONResponse]:
+ if not session.get("docusign_access_token"):
+ return JSONResponse({"error": "not authenticated to DocuSign"}, status_code=401)
+ return None
+
+
+@router.post("/send")
+async def send_test_envelope(body: SendRequest, request: Request):
+ """Send a test envelope using a migrated DocuSign template."""
+ session = get_session(request)
+ err = _require_docusign(session)
+ if err:
+ return err
+
+ headers = {
+ "Authorization": f"Bearer {session['docusign_access_token']}",
+ "Content-Type": "application/json",
+ }
+ base = f"{settings.docusign_base_url}/v2.1/accounts/{settings.docusign_account_id}"
+
+ async with httpx.AsyncClient() as client:
+ # Fetch template to discover actual role names
+ tpl_resp = await client.get(f"{base}/templates/{body.template_id}", headers=headers)
+ role_names = []
+ if tpl_resp.is_success:
+ tpl = tpl_resp.json()
+ recipients = tpl.get("recipients", {})
+ for group in recipients.values():
+ if isinstance(group, list):
+ for r in group:
+ rn = r.get("roleName")
+ if rn and rn not in role_names:
+ role_names.append(rn)
+
+ # Fall back to generic role name if template fetch failed
+ if not role_names:
+ role_names = ["Signer"]
+
+ template_roles = [
+ {"email": body.recipient_email, "name": body.recipient_name, "roleName": rn}
+ for rn in role_names
+ ]
+
+ payload = {
+ "templateId": body.template_id,
+ "status": "sent",
+ "templateRoles": template_roles,
+ "emailSubject": "[Verification Test] Please sign this document",
+ }
+
+ resp = await client.post(f"{base}/envelopes", headers=headers, json=payload)
+
+ if not resp.is_success:
+ return JSONResponse(
+ {"error": "DocuSign API error", "detail": resp.text},
+ status_code=502,
+ )
+
+ data = resp.json()
+ return {"envelope_id": data.get("envelopeId"), "roles": role_names}
+
+
+@router.get("/status/{envelope_id}")
+async def envelope_status(envelope_id: str, request: Request):
+ """Get the current status of a test envelope."""
+ session = get_session(request)
+ err = _require_docusign(session)
+ if err:
+ return err
+
+ async with httpx.AsyncClient() as client:
+ resp = await client.get(
+ f"{settings.docusign_base_url}/v2.1/accounts/{settings.docusign_account_id}/envelopes/{envelope_id}",
+ headers={"Authorization": f"Bearer {session['docusign_access_token']}"},
+ )
+
+ if not resp.is_success:
+ return JSONResponse(
+ {"error": "DocuSign API error", "detail": resp.text},
+ status_code=502,
+ )
+
+ data = resp.json()
+ return {
+ "envelope_id": envelope_id,
+ "status": data.get("status"),
+ "completed_at": data.get("completedDateTime"),
+ "sent_at": data.get("sentDateTime"),
+ }
+
+
+@router.post("/void/{envelope_id}")
+async def void_envelope(envelope_id: str, body: VoidRequest, request: Request):
+ """Void a test envelope after verification is complete."""
+ session = get_session(request)
+ err = _require_docusign(session)
+ if err:
+ return err
+
+ async with httpx.AsyncClient() as client:
+ resp = await client.put(
+ f"{settings.docusign_base_url}/v2.1/accounts/{settings.docusign_account_id}/envelopes/{envelope_id}",
+ headers={
+ "Authorization": f"Bearer {session['docusign_access_token']}",
+ "Content-Type": "application/json",
+ },
+ json={"status": "voided", "voidedReason": body.reason},
+ )
+
+ if not resp.is_success:
+ return JSONResponse(
+ {"error": "DocuSign API error", "detail": resp.text},
+ status_code=502,
+ )
+
+ return {"voided": True, "envelope_id": envelope_id}
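The role-discovery loop in `send_test_envelope` walks every recipient group in the template (`signers`, `carbonCopies`, and so on — DocuSign returns each as a list under `recipients`), collects distinct `roleName` values, and falls back to a generic role when the fetch fails. Pulled out as a standalone helper for clarity (a sketch; `discover_role_names` is not a function in the diff above):

```python
from typing import List


def discover_role_names(recipients: dict) -> List[str]:
    """Collect distinct roleName values across all recipient groups,
    preserving first-seen order; fall back to a generic 'Signer' role
    when none are found (e.g. the template fetch returned nothing)."""
    role_names: List[str] = []
    for group in recipients.values():
        if isinstance(group, list):
            for r in group:
                rn = r.get("roleName")
                if rn and rn not in role_names:
                    role_names.append(rn)
    return role_names or ["Signer"]
```

Note that every discovered role is assigned the same test recipient in the envelope payload — fine for a smoke test, but a multi-role template will route all signing steps to one inbox.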
diff --git a/web/static/app.js b/web/static/app.js
deleted file mode 100644
index a1832be..0000000
--- a/web/static/app.js
+++ /dev/null
@@ -1,343 +0,0 @@
-// Adobe Sign → DocuSign Migrator — frontend app
-// Vanilla JS, no build step.
-
-const $ = id => document.getElementById(id);
-
-let statusTemplates = []; // [{adobe_id, name, status, docusign_id, ...}]
-let dsTemplates = []; // [{id, name, lastModified}]
-let authState = { adobe: false, docusign: false };
-
-// ── Init ────────────────────────────────────────────────────────────────────
-
-document.addEventListener('DOMContentLoaded', async () => {
- await refreshAuth();
- await refreshTemplates();
- await refreshHistory();
-
- $('btn-migrate').addEventListener('click', onMigrate);
- $('btn-refresh').addEventListener('click', async () => {
- await refreshTemplates();
- await refreshHistory();
- });
-});
-
-// ── Auth ─────────────────────────────────────────────────────────────────────
-
-async function refreshAuth() {
- const resp = await fetch('/api/auth/status');
- authState = await resp.json();
- renderAuthBar();
-}
-
-function renderAuthBar() {
- // Adobe: use .env credentials (primary), OAuth dialog (secondary)
- const adobeEl = $('badge-adobe');
- adobeEl.textContent = authState.adobe ? '✓ Adobe Sign' : 'Connect Adobe Sign';
- adobeEl.className = 'auth-badge' + (authState.adobe ? ' connected' : '');
- adobeEl.onclick = authState.adobe
- ? () => disconnectPlatform('adobe')
- : () => connectAdobeEnv();
-
- // DocuSign: JWT grant from .env — no browser sign-in needed
- const dsEl = $('badge-docusign');
- dsEl.textContent = authState.docusign ? '✓ DocuSign' : 'Connect DocuSign';
- dsEl.className = 'auth-badge' + (authState.docusign ? ' connected' : '');
- dsEl.onclick = authState.docusign
- ? () => disconnectPlatform('docusign')
- : () => connectDocusign();
-}
-
-async function disconnectPlatform(platform) {
- await fetch(`/api/auth/${platform}/disconnect`);
- authState[platform] = false;
- renderAuthBar();
- await refreshTemplates();
-}
-
-async function connectAdobeEnv() {
- const el = $('badge-adobe');
- el.textContent = 'Connecting…';
- const resp = await fetch('/api/auth/adobe/connect');
- const data = await resp.json();
- if (data.connected) {
- authState.adobe = true;
- renderAuthBar();
- await refreshTemplates();
- } else {
- el.textContent = 'Connect Adobe Sign';
- // If .env has no credentials, fall back to the OAuth dialog
- if (data.error && data.error.includes('No Adobe Sign credentials')) {
- startAdobeAuth();
- } else {
- setStatus('Adobe Sign error: ' + (data.error || 'unknown'));
- }
- }
-}
-
-async function connectDocusign() {
- const dsEl = $('badge-docusign');
- dsEl.textContent = 'Connecting…';
- const resp = await fetch('/api/auth/docusign/connect');
- const data = await resp.json();
- if (data.connected) {
- authState.docusign = true;
- renderAuthBar();
- await refreshTemplates();
- } else {
- dsEl.textContent = 'Connect DocuSign';
- setStatus('DocuSign error: ' + (data.error || 'unknown'));
- }
-}
-
-// Adobe Sign uses the same manual-paste flow as the CLI:
-// 1. Open auth URL in new tab
-// 2. User authorizes → lands on failed https://localhost:8080/callback page
-// 3. User copies that URL, pastes it into the dialog here
-// 4. We POST it to /api/auth/adobe/exchange
-
-async function startAdobeAuth() {
- const resp = await fetch('/api/auth/adobe/url');
- const { url } = await resp.json();
-
- showAdobeDialog(url);
-}
-
-function showAdobeDialog(authUrl) {
- // Remove any existing dialog
- const existing = $('adobe-auth-dialog');
- if (existing) existing.remove();
-
- const dialog = document.createElement('div');
- dialog.id = 'adobe-auth-dialog';
- dialog.innerHTML = `
-
-