System: FreeScout helpdesk — tickets.raxx.app
Owner: operator
Last verified: 2026-04-30 (issue #715)
Last reviewed: 2026-04-30
tickets.raxx.app uses a two-layer TLS model. There is NO certbot on the
Lightsail instance. This is correct by design.
| Layer | Certificate | Issued by | Renewed by | Expiry |
|---|---|---|---|---|
| Public (user → Cloudflare) | CN=raxx.app | Let's Encrypt E7 | Cloudflare Universal SSL (automatic) | 90-day rolling; current: 2026-07-21 |
| Origin (Cloudflare → Lightsail) | CN=ip-172-26-11-76 (snakeoil) | Self-signed | Never (valid until 2036) | 2036-04-26 |
Why snakeoil at the origin: Cloudflare terminates TLS for end users and
re-connects to the origin on port 443 using Full SSL mode. In Full SSL mode,
Cloudflare does not validate the origin certificate (chain or expiry) — it only
verifies that the connection is encrypted. The snakeoil cert is sufficient and
has a 10-year lifetime. The bootstrap template (terraform/freescout/templates/
user_data.sh.tpl) intentionally omits certbot.
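If there is any doubt that the zone is still in Full mode, the encryption mode can be read from the Cloudflare API. A minimal sketch: `CF_API_TOKEN` and `ZONE_ID` are assumed to be available to the operator, and the `cf_ssl_mode` helper name is ours (the parser is split out so it can be exercised without network access).

```shell
# Pull the "value" field out of the /settings/ssl JSON response on stdin.
cf_ssl_mode() {
  grep -o '"value":"[a-z_]*"' | head -n1 | cut -d'"' -f4
}

# Live usage (requires an API token that can read zone settings):
# curl -s -H "Authorization: Bearer $CF_API_TOKEN" \
#   "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/settings/ssl" | cf_ssl_mode
```

Expected value is `full`; anything else (notably `flexible` left over from an incident, or `off`) should be corrected in the dashboard.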
Why Cloudflare handles the public cert: Cloudflare Universal SSL is
provisioned and renewed automatically for all proxied (orange-cloud) DNS
records. No operator action is required unless the record is un-proxied or the
zone's Universal SSL feature is disabled.
| Symptom | Likely cause |
|---|---|
| Browser shows certificate error for tickets.raxx.app | Cloudflare Universal SSL lapsed, or DNS record switched to DNS-only (grey cloud) |
| `curl -sI https://tickets.raxx.app/` returns 526 Invalid SSL Certificate | Origin cert missing or Apache SSL vhost not configured |
| `curl -sI https://tickets.raxx.app/` returns 525 SSL Handshake Failed | Apache not listening on port 443, or SSL module disabled |
| CF-Ray header absent | Cloudflare is not proxying — check DNS record proxy status |
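The status-code rows of the table above can be folded into a quick triage helper. A sketch; the `cf_triage` name and message strings are ours, not part of the runbook tooling:

```shell
# Map an HTTP status code seen at the edge to the likely cause (per the table).
cf_triage() {
  case "$1" in
    525) echo "525: origin handshake failed - check Apache on :443 and mod_ssl" ;;
    526) echo "526: origin cert rejected - check origin cert and SSL vhost" ;;
    2??|3??) echo "edge OK" ;;
    *)   echo "unhandled status: $1" ;;
  esac
}

# Live usage:
# cf_triage "$(curl -s -o /dev/null -w '%{http_code}' https://tickets.raxx.app/)"
```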
## Routine verification

Run this check monthly or after any Cloudflare / Lightsail change.
```shell
# From any machine with openssl
echo | openssl s_client \
  -connect tickets.raxx.app:443 \
  -servername tickets.raxx.app 2>/dev/null \
  | openssl x509 -noout -subject -issuer -dates
```
Expected output:

```
subject=CN=raxx.app
issuer=C=US, O=Let's Encrypt, CN=E7
notBefore=<date>
notAfter=<date 90 days out>
```
If notAfter is within 30 days, see "Fix: Cloudflare Universal SSL stalled" below.
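The 30-day threshold can be checked mechanically. A minimal sketch, assuming GNU `date` (the `check_expiry` helper name is ours):

```shell
# Print days until expiry, given the notAfter line from `openssl x509 -dates`.
check_expiry() {
  local not_after="${1#notAfter=}"
  local expiry now
  expiry=$(date -ud "$not_after" +%s)
  now=$(date -u +%s)
  echo $(( (expiry - now) / 86400 ))
}

# Live usage:
# na=$(echo | openssl s_client -connect tickets.raxx.app:443 \
#   -servername tickets.raxx.app 2>/dev/null | openssl x509 -noout -enddate)
# [ "$(check_expiry "$na")" -lt 30 ] && echo "WARN: renewal overdue"
```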
```shell
curl -sI https://tickets.raxx.app/ | grep -i "cf-ray\|server"
```
Expected: server: cloudflare and a cf-ray: header. If absent, the DNS
record has been set to DNS-only — re-enable the Cloudflare proxy (orange cloud)
for the tickets A record.
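For scripting, the proxy check can be reduced to a small helper (the `is_proxied` name is ours):

```shell
# Read response headers on stdin; report whether Cloudflare is in the path.
is_proxied() {
  if grep -qi '^cf-ray:'; then echo proxied; else echo direct; fi
}

# Live usage:
# curl -sI https://tickets.raxx.app/ | is_proxied
```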
```shell
ssh -i /tmp/lightsail_us_east_1.pem \
  -o StrictHostKeyChecking=no \
  -o UserKnownHostsFile=/dev/null \
  admin@54.146.13.200 \
  'sudo openssl s_client -connect localhost:443 -servername tickets.raxx.app \
   </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer -dates'
```
Expected: CN=ip-172-26-11-76 (snakeoil), notAfter=Apr 26 ... 2036.
```shell
ssh -i /tmp/lightsail_us_east_1.pem \
  -o StrictHostKeyChecking=no \
  -o UserKnownHostsFile=/dev/null \
  admin@54.146.13.200 \
  'which certbot 2>/dev/null || echo "certbot not installed — expected"'
```
Expected: certbot not installed — expected. If certbot IS found, investigate
who installed it and whether it has modified the Apache vhosts or created
/etc/letsencrypt/ — see "Certbot installed unexpectedly" below.
## Fix: Cloudflare Universal SSL stalled

Symptom:
- openssl s_client from outside shows an expired cert or a CF edge error cert
- Browser shows a security warning for tickets.raxx.app
- notAfter on the public cert is in the past or within 7 days
Cause: Cloudflare Universal SSL auto-renewal failed. This is rare (Cloudflare SLA covers Universal SSL). Common triggers: zone plan downgrade, CAA DNS record blocking Let's Encrypt, or Cloudflare temporarily unable to reach ACME servers.
Diagnose:
1. Check Cloudflare dashboard → raxx.app zone → SSL/TLS → Edge Certificates
2. Confirm "Universal SSL" certificate status is "Active"
3. Check for CAA records: `dig +short CAA tickets.raxx.app` and `dig +short CAA raxx.app`
   (CAs walk up the DNS tree, so a CAA record on the parent zone also applies). Each
   should return either no records, or records that permit letsencrypt.org
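The CAA check can be scripted; a sketch (the `caa_ok` helper is hypothetical):

```shell
# Read CAA records on stdin; succeed if empty or if Let's Encrypt may issue.
caa_ok() {
  local recs
  recs=$(cat)
  if [ -z "$recs" ]; then
    echo "no CAA records - any CA may issue"
  elif echo "$recs" | grep -q 'letsencrypt\.org'; then
    echo "CAA permits letsencrypt.org"
  else
    echo "CAA blocks Let's Encrypt"
    return 1
  fi
}

# Live usage:
# dig +short CAA raxx.app | caa_ok
```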
Fix:
1. In Cloudflare dashboard → SSL/TLS → Edge Certificates → disable then
re-enable Universal SSL (triggers re-issuance)
2. If CAA records are blocking: remove the blocking CAA record or add
0 issue "letsencrypt.org" alongside the existing records
3. Wait up to 24h for Cloudflare to provision the new cert
Verification:
```shell
echo | openssl s_client -connect tickets.raxx.app:443 \
  -servername tickets.raxx.app 2>/dev/null \
  | openssl x509 -noout -dates
```
## Fix: Cloudflare proxy disabled (DNS-only)

Symptom:
- curl -sI https://tickets.raxx.app/ shows no CF-Ray header
- Browser connects directly to the Lightsail IP
- Cloudflare Access gate does not appear (no SSO prompt)
- Browser may show "certificate not trusted" (snakeoil cert served directly)
Cause: Someone changed the Cloudflare DNS record for tickets.raxx.app from
proxied (orange) to DNS-only (grey cloud).
Fix:
```shell
# Via Cloudflare dashboard: DNS → records → tickets A record → toggle to proxied
# Or via Terraform:
cd terraform/freescout
terraform apply -target=cloudflare_record.tickets
```
Verification: curl -sI https://tickets.raxx.app/ | grep cf-ray returns a
cf-ray: value.
## Fix: origin 525 SSL Handshake Failed

Symptom: Cloudflare returns 525 SSL Handshake Failed
Cause: Apache SSL module disabled, SSL vhost not enabled, or Apache stopped.
Fix:
```shell
ssh -i /tmp/lightsail_us_east_1.pem -o StrictHostKeyChecking=no \
  -o UserKnownHostsFile=/dev/null admin@54.146.13.200

# Check Apache status
sudo systemctl status apache2

# Check SSL module
sudo apache2ctl -M | grep ssl
# If ssl module missing:
sudo a2enmod ssl && sudo systemctl restart apache2

# Check SSL vhost enabled
ls /etc/apache2/sites-enabled/ | grep ssl
# If freescout-ssl.conf missing:
sudo a2ensite freescout-ssl.conf && sudo systemctl restart apache2
```
## Certbot installed unexpectedly

Symptom: `which certbot` returns a path; /etc/letsencrypt/ exists; the Apache
SSL vhost no longer references the snakeoil cert.
Context: Certbot should NOT be on this instance. The bootstrap does not install it. If found, it was installed manually outside of IaC.
Cause: Manual operator action post-deploy.
Diagnose:
```shell
sudo certbot certificates 2>&1
grep SSLCertificate /etc/apache2/sites-enabled/freescout-ssl.conf
# If pointing at /etc/letsencrypt/..., certbot has taken over origin TLS.
```
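The vhost check can be turned into a small classifier. A sketch; the `uses_snakeoil` name is ours, and Debian's default snakeoil path is assumed:

```shell
# Read an Apache SSL vhost on stdin; report which cert it points at.
uses_snakeoil() {
  if grep -q 'ssl-cert-snakeoil'; then
    echo "snakeoil (expected)"
  else
    echo "certbot-managed (investigate)"
  fi
}

# Live usage:
# uses_snakeoil < /etc/apache2/sites-enabled/freescout-ssl.conf
```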
Impact assessment:
- If the certbot cert is valid and Apache is serving HTTPS correctly to Cloudflare, the site is functional. This is not an immediate incident.
- However, certbot auto-renewal via the HTTP-01 challenge requires port 80 to be reachable from Let's Encrypt servers. The Lightsail firewall currently restricts port 80 to Cloudflare IP ranges only; Let's Encrypt IPs are NOT in that allowlist. Auto-renewal WILL fail silently until the cert expires.
Fix options (escalate to operator to choose):
1. Remove certbot and restore snakeoil: revert the Apache SSL vhost to use the snakeoil cert (preferred — matches IaC design).
2. Keep certbot but fix renewal: switch to the DNS-01 challenge, configure a Cloudflare API token in certbot, and update the Lightsail firewall if needed. This is significant scope — requires IaC changes.
Recommended path: Option 1. The snakeoil cert is correct for this architecture. Cloudflare handles the public cert.
## Emergency fallback: temporary Flexible SSL

If a cert incident causes the site to be inaccessible and cannot be resolved quickly:

```shell
# Enable Cloudflare "Flexible" SSL as a temporary bridge
# (Cloudflare does not require a cert at the origin in Flexible mode)
# Dashboard → SSL/TLS → Overview → change encryption mode to "Flexible"
# WARNING: data between CF and origin is then unencrypted — temporary only
# Revert to "Full" as soon as origin TLS is restored
```
Wake the operator when:
- The public cert is expired and CF Universal SSL re-issuance has been tried and failed after 24h
- Origin Apache is unresponsive to port 443 connections and an Apache restart does not restore it
- Certbot was found installed and has caused a renewal loop or a broken Apache config
Until automated monitoring is wired up, run the routine verification procedure above monthly. Target: add Cloudflare cert-expiry monitoring via the API (action item from issue #715).
Related files:
- docs/ops/runbooks/freescout.md
- terraform/freescout/
- terraform/freescout/templates/user_data.sh.tpl