DNS

Network

11 sections
55 source tickets

Last synthesized: 2026-02-13 02:47 | Model: gpt-5-mini
Table of Contents

1. Domain provisioning and DNS records for hosting and vanity services

22 tickets

2. NS delegation blocked by existing CNAME records

1 ticket

3. Service outages after domain/host changes due to DNS propagation and hostname certificate mismatches

6 tickets

4. DNS records for email authentication and domain verification (DKIM, TXT, Amazon SES)

13 tickets

5. Internal hostname-to-IP inconsistencies affecting application communications

2 tickets

6. Registrar-imposed 60-day transfer lock preventing domain transfers

1 ticket

7. Administrative deletion of obsolete DNS host records

6 tickets

8. Registrar payment and account access blocking domain purchases

1 ticket

9. Automated approval timeout causing domain request auto-decline

1 ticket

10. Client NICs or virtual adapters causing persistent static DNS overrides

1 ticket

11. Local Windows Firewall blocking application-level server-to-workstation communications

1 ticket

1. Domain provisioning and DNS records for hosting and vanity services
91% confidence
Problem Pattern

Domains, subdomains, and custom/vanity hostnames failed DNS resolution, registrar delegation, vendor validation, or redirects because nameservers, delegation, or DNS records (A or CNAME) were missing, incorrect, expired, or not propagated. Symptoms included unreachable sites, browser DNS failures such as DNS_PROBE_FINISHED_NXDOMAIN / NXDOMAIN, redirect 404/403 responses, vendor verification failures, and stale DNS entries discovered by asset scans. Triggers included expired or unrenewed domains, missing registrar credentials or transfer auth‑codes, DNS zones hosted by external providers that blocked local changes, and unclear or fragmented administrative ownership.

Solution

Incidents were resolved by first identifying the authoritative registrar and DNS zone, then creating, correcting, or removing DNS records and restoring delegation as required. Actions and outcomes included: registering or renewing domains and recording registrar contacts; locating DNS zones (including zones hosted in AWS Route53) and applying updates there; adding apex and www A records to point hosts at hosting IPs (examples: synteaplus landing page, csfi A record); and creating CNAME records for vanity domains and CDN/vendor targets with fully qualified canonical names (examples: walbrook.ac.uk → libf.jotform.com; proctoring-dashboard.cama.iu.org → cname.vercel-dns.com; short.io CNAMEs in Route53). Redirect failures were traced to a combination of DNS and application redirect configuration and were resolved after both layers were corrected.

Zones served by external DNS providers (for example Salesforce‑hosted zones) were escalated to the owning teams when local changes were not possible. Asset discovery (EASM) flagged stale entries that routed unrelated domains to third‑party systems (examples: hrteam.de and iubh.at pointing to a Starface instance); those DNS records were removed to stop unintended routing.

Expiry, transfer, and ownership cases were handled by locating registrars and credentials (including DENIC transit/auth‑code work) and liaising with previous managers or registrars to restore nameserver control; some cases noted hosting/registrar vendors (Dogado/Proximity) refusing to disclose contract contacts when requesters were not listed. DNS record visibility and vendor validation were confirmed with lookup tools (MxToolbox, nslookup) after changes. After delegation and records were corrected and had time to propagate, vendor portals reported domains as active, redirects served expected responses, and hosted sites became reachable.
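The Route53 record work described above can be sketched as a change batch. This is a minimal illustration, not the exact change applied in the tickets: the zone ID, TTL, and the builder function are assumptions, and the batch shape is what boto3's `route53.change_resource_record_sets` accepts.

```python
def upsert_record(name, rtype, value, ttl=300):
    """Build a Route53 change batch that creates or corrects one record.

    UPSERT creates the record if it is missing and overwrites it
    otherwise, matching the 'create or correct' actions described above.
    """
    return {
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": name,
                "Type": rtype,
                "TTL": ttl,
                "ResourceRecords": [{"Value": value}],
            },
        }]
    }

# Vanity CNAME pointing a dashboard host at its vendor target (names taken
# from the examples above); the batch would be passed to boto3 as
# route53.change_resource_record_sets(HostedZoneId=..., ChangeBatch=batch).
batch = upsert_record("proctoring-dashboard.cama.iu.org.", "CNAME",
                      "cname.vercel-dns.com.")
```

The same builder covers the apex A-record cases by passing `rtype="A"` and the hosting IP as the value.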

Source Tickets (22)
2. NS delegation blocked by existing CNAME records
95% confidence
Problem Pattern

Attempts to delegate subdomains via NS records failed because existing CNAME records existed for the same names; affected hosts continued resolving to legacy/staging targets instead of the new nameservers.

Solution

The conflicting CNAME records that pointed names to the Dogado staging hosts were removed and the zone was updated with NS delegations to the target DNS provider (AWS Route 53 nameservers). After the CNAMEs were cleared and NS records applied, the delegated names began resolving to the new infrastructure (NLB) as intended.
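The underlying DNS rule here is that a CNAME cannot coexist with any other record at the same name (RFC 1034), which is why the delegation could not take effect until the CNAMEs were cleared. A minimal pre-flight check, assuming the zone is represented as a list of (name, type) pairs (the hostnames below are placeholders, not the actual affected names):

```python
def delegation_conflicts(records, delegated_name):
    """Return the conflicting record types at `delegated_name` that must
    be removed before NS delegation records can be added there.

    A CNAME excludes all other record types at the same name, so its
    presence blocks the NS records needed for delegation.
    """
    return sorted({rtype for name, rtype in records
                   if name == delegated_name and rtype == "CNAME"})

zone = [
    ("app.example.com.", "CNAME"),   # legacy staging alias (placeholder)
    ("example.com.", "A"),
]
delegation_conflicts(zone, "app.example.com.")  # → ["CNAME"]
delegation_conflicts(zone, "example.com.")      # → [] (safe to delegate)
```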

Source Tickets (1)
3. Service outages after domain/host changes due to DNS propagation and hostname certificate mismatches
90% confidence
Problem Pattern

After DNS record updates, nameserver outages, or local resolver failures, users experienced DNS lookup failures and external service unavailability. Symptom patterns included browser “site cannot be reached” or “you are not connected” errors, nslookup returning “DNS request timed out” or “Server: Unknown” (sometimes showing a link‑local IPv6 address as the resolver), intermittent internal links/CSS failures, and browser “connection is not private” SSL hostname mismatch messages. Incidents occurred across networks and browsers and affected public websites and third‑party services.

Solution

Incidents were attributed to four primary root causes: (1) vendor authoritative nameserver outages, (2) incomplete DNS propagation following domain/host changes, (3) SSL certificate hostname coverage mismatches (for example, certificate covering the apex but not the www subdomain), and (4) local DNS resolver or DNS server unreachability/misconfiguration causing DNS request timeouts.

Vendor-side nameserver outages (example: datadoghq.eu) produced short external interruptions that vendors identified and resolved, restoring access within minutes. In multiple domain/host change events, services resumed once updated DNS records propagated globally (propagation observed up to 24–48 hours) and client caches refreshed; in one case clearing a browser cache allowed the homepage to load while internal links and CSS remained unavailable until propagation and cache refresh completed. SSL hostname mismatches produced browser “connection is not private” errors until DNS and certificate coverage aligned for the accessed hostname. Diagnostics that exposed local resolver failures included nslookup timeouts and “Server: Unknown” responses where the client showed a link‑local IPv6 address as the configured resolver; those cases indicated an unreachable or misconfigured local DNS server and required network/DNS administrator or vendor engagement. One ticket contained no documented IT remediation; diagnostics nonetheless pointed to resolver timeouts as the likely root cause.
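The certificate-coverage mismatch in root cause (3) comes down to hostname matching. A simplified sketch of the rule (a wildcard covers exactly one left-most label; real validation such as RFC 6125 / the `ssl` module handles further corner cases this ignores):

```python
def covers(cert_name, hostname):
    """Return True if a certificate name covers a hostname.

    A wildcard such as *.example.com matches www.example.com but not
    the apex example.com, and not deeper names like a.b.example.com.
    """
    cert = cert_name.lower().split(".")
    host = hostname.lower().split(".")
    if cert[0] == "*":
        return len(cert) == len(host) and cert[1:] == host[1:]
    return cert == host

# A certificate issued only for the apex does not cover www, which is
# exactly the mismatch pattern described above:
covers("example.com", "www.example.com")    # → False
covers("*.example.com", "www.example.com")  # → True
```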

Source Tickets (6)
4. DNS records for email authentication and domain verification (DKIM, TXT, Amazon SES)
77% confidence
Problem Pattern

Missing, incorrect, or conflicting DNS authentication and ownership records (DKIM, SPF, DMARC, TXT, CNAME, MX) caused vendors and services to report domains as “not verified” or “missing DKIM”. Symptoms included failed or blocked automated sends (including custom‑from domains), vendor UIs preventing message composition (write‑protected), broken link tracking when CNAME tracking subdomains were absent or pointed elsewhere, SSO/SCIM/provider verification failures, and externally generated messages (for example calendar invites) producing errors or incorrect provider links after send.

Solution

Provider-specified DNS records were published or corrected per each vendor’s requirements and confirmed visible via DNS queries; services moved from “not verified” to verified after DNS propagation. Specific actions included: publishing public TXT verification tokens for site ownership and SSO/SCIM providers (Google, Canva, Cursor, Mentimeter, Atlassian/Okta and similar) in iu.org and delegated zones; deploying DKIM records exactly as vendors required (creating CNAME‑based DKIM selectors when vendors supplied CNAME targets and publishing vendor-supplied DKIM TXT values when required); and adding requested SPF, DMARC and MX TXT/MX records. For Qualtrics integrations, required records for a Custom From‑Domain were documented and, in cases where the organization chose to permit third‑party sending, public DKIM TXT values (from Qualtrics), a public DMARC TXT and an MX record were published and an AWS SES account (account name: iuresearch) was provisioned to authorize sending. One Qualtrics case documented that the vendor required a Custom Domain and specific DNS records but the change was not performed by IT (ticket closed as “Won’t Do”) and responsibility remained with Qualtrics. For Mailgun/Cision link tracking, a CNAME‑based tracking subdomain plus corresponding SPF/DKIM entries were created; verification and link tracking failures were traced to an inability to publish the requested CNAME because the subdomain was reserved and were resolved after correcting the CNAME target. A DMARC policy‑change request for iu.org (example before: "v=DMARC1; p=quarantine; rua=mailto:7ic59aw1fp@rua.powerdmarc.com; ruf=mailto:7ic59aw1fp@ruf.powerdmarc.com; pct=100; fo=1; adkim=r; aspf=r"; requested/after: "v=DMARC1; p=reject; rua=mailto:7ic59aw1fp@rua.powerdmarc.com; ruf=mailto:7ic59aw1fp@ruf.powerdmarc.com; pct=100; fo=1; adkim=r; aspf=r") was captured and handled via DNS TXT updates. 
Name conflicts, reserved subdomains, and organizational policy occasionally prevented publishing requested records; third‑party sending requests were evaluated against policy and declined where appropriate. One attempted SMTP relay failed because the vendor platform overwrote the configured sender address with the SMTP username, which was noted during evaluation.
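The DMARC policy change quoted above can be verified mechanically. A small parser for the tag=value record syntax, sufficient for the records quoted here but not a full RFC 7489 implementation:

```python
def parse_dmarc(record):
    """Parse a DMARC TXT record ('v=DMARC1; p=reject; ...') into a dict.

    Splits on ';' then takes everything after the first '=' as the value,
    so values containing '=' or ':' (e.g. rua=mailto:...) survive intact.
    """
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")
            tags[key.strip()] = value.strip()
    return tags

# The requested change above altered only the policy tag:
before = parse_dmarc("v=DMARC1; p=quarantine; pct=100; adkim=r; aspf=r")
after = parse_dmarc("v=DMARC1; p=reject; pct=100; adkim=r; aspf=r")
# before["p"] == "quarantine", after["p"] == "reject"
```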

Source Tickets (13)
5. Internal hostname-to-IP inconsistencies affecting application communications
70% confidence
Problem Pattern

DNS name resolution returned incorrect, stale, or missing IP addresses for internal hostnames, causing hostname-based application communications and access to network file shares to fail. Affected systems included application servers and VMs; symptoms included inability to reach services by hostname, network-share access failures, and occasional prolonged VM boot/restart hangs. Issues were observed in internal Windows DNS environments and internal domains.

Solution

Incorrect hostname-to-IP mappings in internal DNS were corrected so hostnames resolved to the actual machine IPs; DNS entries were aligned with the network topology and related firewall/port considerations for the e-test application were reviewed. In at least one incident affecting a VM that experienced a prolonged (~40 minute) boot/restart hang, IT support applied a DNS-side fix and restarted the VM; after the VM recovered the shared network directory became reachable again. Tickets did not include low-level configuration details of the DNS remediation, only that DNS corrections plus bringing the affected VM back online restored hostname-based communications and network-share access.
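Mismatches like these can be caught by comparing an expected inventory against live resolution. A sketch with the resolver injected so the check is testable offline; in production `socket.gethostbyname` would be the live resolver, and the hostnames and IPs below are placeholders:

```python
import socket

def find_mismatches(expected, resolve=socket.gethostbyname):
    """Return {hostname: (expected_ip, observed_ip)} for every hostname
    whose DNS answer disagrees with the inventory.

    `expected` maps internal hostnames to the IPs they should resolve
    to; resolution failures are reported with observed_ip = None.
    """
    bad = {}
    for host, want in expected.items():
        try:
            got = resolve(host)
        except OSError:
            got = None
        if got != want:
            bad[host] = (want, got)
    return bad

# Stub resolver standing in for the internal Windows DNS (placeholders):
inventory = {"app-01.internal": "10.0.1.5", "app-02.internal": "10.0.1.6"}
stale = {"app-01.internal": "10.0.1.5", "app-02.internal": "10.0.9.9"}
mismatches = find_mismatches(inventory, resolve=lambda h: stale[h])
# → {"app-02.internal": ("10.0.1.6", "10.0.9.9")}
```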

Source Tickets (2)
6. Registrar-imposed 60-day transfer lock preventing domain transfers
92% confidence
Problem Pattern

Domain transfers could not be initiated because the registrar enforced a 60-day transfer lock after domain purchase; attempts to start a transfer returned no explicit error codes and transfers remained blocked until the lock expiry date. The lock affected both the source and target registrars and prevented normal transfer workflows for the domain.

Solution

The issue was resolved by waiting for the registrar's mandatory 60-day transfer lock to expire. After the lock period ended (transfer became possible on 2024-07-26), the transfer was initiated to Cloudpit (Dogado) on 2024-07-29. The registrar sent the Auth-Code via email, the code was provided to the completing party, and the transfer finished on 2024-07-30. Post-transfer follow-up was planned to configure domain forwarding/redirects.
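The unlock date follows directly from the 60-day lock. A trivial calculation; the 2024-05-27 purchase date below is hypothetical, back-computed to be consistent with the 2024-07-26 unlock date noted above:

```python
from datetime import date, timedelta

TRANSFER_LOCK = timedelta(days=60)  # registrar-enforced lock after purchase

def transfer_possible_from(purchase_date):
    """First date a transfer can be initiated once the lock expires."""
    return purchase_date + TRANSFER_LOCK

# Hypothetical purchase date consistent with the unlock noted above:
transfer_possible_from(date(2024, 5, 27))  # → date(2024, 7, 26)
```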

Source Tickets (1)
7. Administrative deletion of obsolete DNS host records
90% confidence
Problem Pattern

These were administrative requests to remove obsolete or unused DNS host (A/CNAME) or service (NAPTR) records, or to unbind domains from DNS zones and services. Symptoms included stale domain entries in DNS listings or monitoring exports and unintended redirects; in some cases there were no explicit error messages or active-service failures. Deletions were occasionally postponed or reversed (records re-enabled) because external services or internal migrations still depended on the entries.

Solution

Obsolete DNS entries and domain bindings were removed from the relevant DNS management interfaces and associated systems, or—when required—were re-enabled and removals postponed after coordination with the service owner. Examples of completed removals included deleting a host from the cama.iu.org zone and removing an expired host pointing to a Mattermost instance from Route53, which stopped an unintended redirect seen in monitoring exports. Eduroam-related domains and their NAPTR records were unbound and cleaned up across the Eduroam website, RADIUS proxies, NPS configurations, test user accounts, and stored credentials (1Password). In one case a planned deletion for Brainyoo began in 2022 but was reversed and subsequent deletion attempts were repeatedly postponed after Brainyoo reported an ongoing internal migration and requested the records remain active; these scheduling and re-enablement actions were tracked in the ticketing system. Routine deletions (for example locating the iubh.de zone, removing gggp.iubh.de, and confirming removal) were logged in ticket updates and change comments.
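Cleanups like the stale Mattermost host above can be scripted by selecting records whose values point at decommissioned targets. A sketch under the assumption that the zone is exported as a list of dicts; the hostnames and targets below are placeholders:

```python
def records_to_delete(zone_records, obsolete_targets):
    """Return zone records whose value points at a decommissioned target.

    The full record is returned because a Route53 DELETE change must
    match the existing name, type, TTL, and value exactly.
    """
    return [r for r in zone_records if r["value"] in obsolete_targets]

zone = [
    {"name": "chat.example.com.", "type": "CNAME",
     "ttl": 300, "value": "old-mattermost.example.net."},
    {"name": "www.example.com.", "type": "A",
     "ttl": 300, "value": "203.0.113.10"},
]
doomed = records_to_delete(zone, {"old-mattermost.example.net."})
# → only the stale CNAME; each entry would become a DELETE change-batch item
```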

Source Tickets (6)
8. Registrar payment and account access blocking domain purchases
90% confidence
Problem Pattern

Attempts to purchase a specific domain failed because the preferred registrar required an upfront prepayment and the buyer lacked direct account access; the purchase remained blocked across the usual providers (Cloudpit, AWS) until Accounting completed the registrar-mandated prepayment and registrar account credentials were provided.

Solution

The domain purchase was completed at united-domains after Accounting processed the registrar-required prepayment. United-domains was identified as the registrar with the best offer, the prepayment was requested and completed, the domain iu.tech was purchased at united-domains, and login credentials for the registrar account were sent to the requesting owner (Marina). Dirk Bialojahn was noted as the contact for subsequent domain configuration.

Source Tickets (1)
9. Automated approval timeout causing domain request auto-decline
85% confidence
Problem Pattern

A request for a custom domain (GCD) stalled because the manager/cost-center approver did not approve within the automated 14-day window, leaving the domain request pending and unfulfilled.

Solution

The Guided Conversation Designer domain request was automatically declined and the Jira ticket was closed when the required approver did not respond within the 14-day approval window. No domain registration or configuration actions were performed; the ticket contained suggested next steps (obtain approval, complete any required accounting prepayment and coordinate stakeholders) but those were not executed before the automation closed the request.

Source Tickets (1)
10. Client NICs or virtual adapters causing persistent static DNS overrides
70% confidence
Problem Pattern

Windows laptops showed DNS servers locked to a static address (192.168.0.1) that caused loss of Internet on other Wi‑Fi networks; the DNS field sometimes reverted to the static value even though netsh/ipconfig reported the adapter's DHCP state as enabled and DHCP-provided DNS as a different address. Affected systems included WLAN adapters on Windows 10 with VMware/VirtualBox present.

Solution

The issue was resolved by removing or disabling conflicting virtual network adapters and clearing the stale static DNS configuration, then resetting the TCP/IP stack. A TCP/IP/Winsock reset (netsh int ip reset; netsh winsock reset) was applied and the adapter DNS entries were returned to DHCP; after the virtual adapters were corrected the DHCP-provided DNS (100.72.0.1 in the environment) was consistently applied and Internet access returned.
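The static-override diagnosis above can be automated by parsing `netsh interface ipv4 show dnsservers` output. The sample text below is illustrative of the format rather than captured from the affected machines, and the parser deliberately ignores corner cases (e.g. IPv6 continuation lines):

```python
def static_dns_servers(netsh_output):
    """Extract statically configured DNS servers from netsh output.

    Returns [] when the interface gets its DNS via DHCP; a non-empty
    list flags the kind of static override described above.
    """
    servers, in_static = [], False
    for line in netsh_output.splitlines():
        line = line.strip()
        if line.startswith("Statically Configured DNS Servers:"):
            in_static = True
            tail = line.split(":", 1)[1].strip()
            if tail:
                servers.append(tail)
        elif in_static:
            # Extra servers appear on following lines until the next
            # labelled field (which contains a colon) or a blank line.
            if ":" in line or not line:
                in_static = False
            else:
                servers.append(line)
    return servers

SAMPLE = """Configuration for interface "Wi-Fi"
    Statically Configured DNS Servers:    192.168.0.1
    Register with which suffix:           Primary only
"""
static_dns_servers(SAMPLE)  # → ["192.168.0.1"]
```

On a healthy adapter the same field reads "DNS servers configured through DHCP", so the function returns an empty list.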

Source Tickets (1)
11. Local Windows Firewall blocking application-level server-to-workstation communications
80% confidence
Problem Pattern

A local exam server (wfa-exa-01) was online and the site had Internet, but the exam server software could not communicate with the Admin workstation when attempting to set up an exam; an application-level error was shown and local network reachability appeared inconsistent.

Solution

Troubleshooting identified a local network communication block, and communications were restored after Windows Firewall rules were adjusted to permit the exam server/Admin software traffic. Server reachability was verified and firewall exceptions for the exam service were applied and confirmed, after which the Admin workstation could successfully connect to wfa-exa-01 and set up exams.
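After a firewall adjustment like this one, reachability of the service port can be confirmed with a plain TCP probe. A minimal sketch; the hostname is taken from the ticket, but the port is a placeholder since the exam service's actual port was not recorded:

```python
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds.

    A refused or timed-out connection is the symptom a host-firewall
    block produces at this layer.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. port_open("wfa-exa-01", 443) run from the Admin workstation after
# the firewall exception is applied (443 is a placeholder port).
```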

Source Tickets (1)