Telephony

Cloud

42 sections
762 source tickets

Last synthesized: 2026-02-12 23:31 | Model: gpt-5-mini
Table of Contents

1. Cloudya/NFON account, extension and phone-number provisioning failures

150 tickets

2. Shared external numbers and Cloudya call-queue / ring-group setup limitations

22 tickets

3. Vonage account roles, call-group membership and dialer/browser integration failures

241 tickets

4. Provisioning corporate SIMs for WhatsApp Business and PC-based SMS access

44 tickets

5. Temporary mobile data increase to 'unlimited' via Telekom self-service portal

6 tickets

6. Cloudya desktop window positioned off-screen on multi-monitor setups

6 tickets

7. Twilio EKG showing orange warning triangle and WhatsApp messages stuck loading

82 tickets

8. iPhone not receiving incoming calls due to unconditional call forwarding

7 tickets

9. Provisioning Cloudya accounts and assigning geographic DIDs for external users

26 tickets

10. No audio on Cloudya desktop/web calls while local audio devices test OK

8 tickets

11. Unexpected outbound calls from Twilio triggered by scheduled tasks on CRM Opportunities

11 tickets

12. Twilio login/console failing with JavaScript TypeError on Worker records

6 tickets

13. Avaya Communicator prompted for configuration and prevented agent registration

4 tickets

14. Teams blocked international outbound calls due to missing PSTN/Calling-Plan license

2 tickets

15. Outbound Cloudya/NFON numbers flagged as 'Fraud/Spam' on recipient Android devices

2 tickets

16. Inbound callflow misrouting caused by outdated service/opening hours

4 tickets

17. Avaya reporting does not retain or expose per-direct-number call-volume data

1 ticket

18. Twilio blocking outbound WhatsApp when WAPP consent record appears stale

11 tickets

19. Inbound call qualification failure caused by recent code regression

1 ticket

20. One-off PBX call forwarding activation for a specific DID

6 tickets

21. Twilio outbound call failure 45301 caused by incorrect or misformatted destination number

5 tickets

22. Twilio Caller ID registration blocked when service mobile cannot receive verification calls

45 tickets

23. Outbound calls not processed when Power_Outbound enabled before its scheduled start

6 tickets

24. Vonage WebRTC add-on missing from webshop after platform update

1 ticket

25. Twilio POB inbound call preview behavior varies by opportunity stage

3 tickets

26. Provisioning of specific Twilio phone-number requests

8 tickets

27. Unexpected callbacks traced to worker phone-number mapping and routing logic

15 tickets

28. Perceived Power Outbound outage caused by low task/applicant volume, not platform failure

9 tickets

29. Cloudya (CARE) web app full unavailability due to provider-side outage

4 tickets

30. Cloudya upgrade request blocked by lack of administrative credentials

1 ticket

31. Salesforce telephony feature unavailable due to missing permissions/approval

1 ticket

32. Twilio initial-login frontend TypeError from null addEventListener during first-time setup

1 ticket

33. Twilio sign-in failure due to missing password-reset email — Omnichannel-handled account recovery

2 tickets

34. Verifying Vonage license for call-tracking / virtual-number campaign attribution

1 ticket

35. Vonage account provisioning and Salesforce record linkage for new users

7 tickets

36. Calls terminating ~2 seconds after answering for a single external contact

1 ticket

37. Intermittent DTMF tones not recognised by IVR for specific incoming numbers

1 ticket

38. POB/Oppy session stuck in a phantom 'dummy call' causing no dial tone

1 ticket

39. Twilio blocking outbound international calls due to destination number risk classification

5 tickets

40. Twilio POB creating duplicate outbound tasks and repeatedly calling same Salesforce opportunities

3 tickets

41. Scheduled callback (Rückruftermin) recorded but not executed in Twilio

1 ticket

42. Twilio Power Outbound slow call setup and repeated follow-up (FUP) assignment

1 ticket

1. Cloudya/NFON account, extension and phone-number provisioning failures
95% confidence
Problem Pattern

Cloudya/NFON users experienced telephony and sign-in failures: sign-in loops, 'Passwort ungültig' ('password invalid') or repeated password prompts, very long or stuck sign-in loads, or expired/looping activation/forgot-password flows. Telephony symptoms included immediate busy/fast busy on outbound calls, no ringback, 'Nummer nicht vergeben' ('number not assigned'), and missing inbound ringing/events in soft clients. A frequent root cause was accounts lacking an assigned telephone number (numbers were auto-deleted after prolonged inactivity, commonly ~three months); other causes included deleted/recycled external DIDs, missing internal extensions, and provider routing or outage issues.

Solution

Access and calling were restored by reprovisioning or re-linking Cloudya/NFON identities, user telephony identities, internal extensions, and external DIDs. Observed remediations included:

• In multiple cases login failures were traced to accounts with no assigned telephone number (numbers had been auto-deleted after prolonged inactivity, commonly ~three months); assigning or restoring the previous IU/fixed-line number to the user restored Cloudya login.

• Accounts that failed activation or triggered expired/looping forgot-password flows were recreated or had passwords reset and fresh activation/forgot-password links issued; activation and password emails were frequently located in Outlook Junk/Spam or delivered to obsolete addresses.

• Deleted or recycled numbers were reassigned or replaced when possible; numbers restored on the provider side reappeared in Cloudya and generated notification emails. Site-bound external numbers were corrected by assigning the proper physical site.

• Missing endpoints and apps were reprovisioned and the Cloudya app was assigned in Azure AD/Intune/Company Portal (a centralized Company Portal deployment of Cloudya v1.7.0 replaced ZIP installers and removed persistent update prompts).

• Desktop and web clients that showed only Settings, a white screen, or a missing dialer were restored by signing out/in or by recreating the user profile; assigning a missing internal extension restored the full desktop UI in several reports.

• Queue and function-key discrepancies were resolved by adding users to the correct queue and applying a reference user's function-key/profile (changes typically propagated within minutes; occasional 5–10 minute delays were observed).

• Fax devices were restored by binding the device identifier and PIN in Cloudya Settings → Fax. Voicemail/mailbox PINs were set using internal extensions or temporary mailbox passwords, and activation details were sometimes delivered via internal portals.

• Inventory and provider discrepancies were resolved by auditing NFON forwarding lists against the Smartphone Inventory and provider records and disabling provider-side forwardings that remained active after inventory deactivation; NFON display names and ownership records were updated where they did not match.

• Device investigations verified NFON numbers and analog-to-network adapters (for example Cisco SPA112) and checked adapter online status; where adapters reported online but calls still failed, alternative solutions (for example GSM emergency-call devices) were proposed. Mobile call forwarding was noted to support only a single target number.

• Key-user and monitoring access issues were fixed by adding staff to the Cloudya Keyuser group.

• Provider incidents (including NFON/Cloudya outages and B2B/Twilio inbound routing interruptions) were tracked as vendor outages; NFON/Cloudya applied configuration and routing fixes and confirmed inbound reachability with test calls.

• In one persistent case a soft client continued to miss inbound events after reinstall; this was resolved operationally by finalizing the migration to Twilio as the long-term remedy.

Source Tickets (150)
2. Shared external numbers and Cloudya call-queue / ring-group setup limitations
91% confidence
Problem Pattern

Incoming calls to shared or site-central numbers (call queues, hotlines, ring-groups) sometimes failed to produce audible or visual ringing on intended Cloudya endpoints while appearing in users' call histories as “missed.” Queue and ring-group delivery was frequently limited to a user’s configured primary device; queue-login softkeys were sometimes absent or not propagated, preventing expected queue join/leave. Hotline provisioning and delivery failures occurred when sites lacked assigned DID/number blocks or when public listing/forwarding of central numbers or misconfigured Twilio/DS inbound call flows/skills routed calls to unintended recipients. Additionally, user account metadata errors (incorrect email addresses or misspelled names) caused missed callbacks or notification failures.

Solution

Affected staff were added as members of the relevant site-central hotlines and call queues, and queue-login function keys (softkeys) were created or copied from a working profile and assigned or removed from user profiles as appropriate; the new softkeys became visible on endpoints after users logged out and back in. Investigation determined Cloudya call queues and ring-groups delivered calls only to the user’s configured primary device; changing a user’s primary device to the intended endpoint (mobile or desktop) restored audible and visual ringing on that client. Where callers needed voicemail if no agent answered, a shared mailbox was added as the queue’s final destination so callers could leave messages. User account metadata errors (incorrect email addresses and misspelled surnames) were corrected and restored callback visibility and notification delivery. Hotline provisioning required an existing site-level DID/number block; missing number blocks were allocated and provider/contractual steps completed before hotlines were created. Outgoing calls continued to display the account’s assigned extension/number-block; changes to displayed outbound numbers required adjusting that assignment. Misrouted campus calls were often traced to public listing or forwarding of central service numbers; in a separate class of incidents, incomplete or misconfigured inbound Twilio/DS call-flow or skill logic caused hotlines to be delivered to unintended recipients and were mitigated by disabling the Inbound skill until the DS flow was completed. Provisioning of third-party trunks/accounts (e.g., Vonage) failed when required reference-user details were not provided; supplying the required reference user or using an alternative provider (Twilio was recommended in some cases) was necessary to complete provisioning. Where users could not install the Cloudya client because they lacked local administrator rights on Windows devices, the Cloudya app was assigned to the device via Microsoft Intune (Company Portal) so the app could be installed without local admin access. For requests to let remote or home-office staff answer emergency hotlines, consistent resolutions included adding remote staff as queue members or adjusting queue/forwarding routing so remote endpoints could receive calls. A pilot for multiple DS locations was proposed for broader hotline coverage. Investigation of webinar-delivery requirements confirmed Zoom webinars required a separate webinar license and no free webinar licenses were available.

3. Vonage account roles, call-group membership and dialer/browser integration failures
95% confidence
Problem Pattern

Browser/WebRTC telephony clients (Vonage, Twilio/Flex/TaskRouter, Dialpad) exhibited missing or greyed call controls, absent or delayed incoming‑call notifications, unresponsive Accept/Hang‑up/Transfer controls, and transient UI errors such as 'Call could not be accepted' or 'Connection unavailable'. Telephony SDKs reported signalling/auth errors (connectionerror-31005, transporterror-31009, accesstokeninvalid/20101), duplicate or missing call identifiers, inconsistent ringback/early‑media or post‑dial delays, and missing caller‑ID entries that blocked outbound dialing. Twilio Flex also showed channel‑switch failures from text/chat to outbound voice that failed silently despite worker_attributes indicating voice_outbound; TaskSid/TaskRouter evidence was present. Users reported one‑way/low/no audio, immediate call drops when the browser did not detect input, and intermittent headset/Bluetooth/device recognition failures.

Solution

Incidents were resolved through a combination of account/permission fixes, CTI/CRM mapping corrections, client-state resets, vendor patches, and device-level remediation:

• Provisioning fixes: missing or unlinked Vonage/Twilio accounts were created or reactivated, invitation/password flows completed, expired credentials reset, accounts unlocked, and correct Vonage/Twilio roles and reference-user mappings applied and recorded in Salesforce.

• CTI/call-center fixes: integrations were reinitialised or toggled, stale Salesforce identifiers and mappings were corrected, and legacy or duplicate profiles removed so agents could join queues and regain the Phone Button and call logging.

• Browser and client-state faults: server-side session terminations, user sign-out/sign-in, browser/page reloads, in-page “Clear Issues”, or reinstall/reinitialisation of the Vonage WebRTC softphone and extensions cleared many transient UI errors; legacy extension remnants were removed where present.

• Network, signalling and provider token faults: voice clients/services were restarted, valid access tokens reissued (a token-minting sketch follows this list), and vendor SDK/telephony patches applied (including Twilio caller-ID and signalling fixes); when UI controls or caller-ID entries remained missing, call identifiers (TaskSid/CallGUID) were captured and escalated to vendor support.

• Twilio Flex/TaskRouter observations: several channel-switch failures from chat to outbound voice were investigated by capturing TaskSid and worker_attributes for escalation; a number of those tickets were closed with no documented technical remediation.

• Audio and device faults: microphone permissions, incorrect device selection, OS/browser audio drivers, or defective headsets/USB dongles were common causes; microphone access and correct I/O selection were restored, Chrome/Edge and audio drivers updated, audio devices reinitialised and faulty headsets/dongles replaced. Bluetooth interoperability problems often fell back to wired operation until vendor driver/stack updates were applied. A metallic hang-up noise was removed after a vendor-provided Vonage client configuration change and a client/headset restart.

• Windows 11/Dell symptoms (screen/monitor turning off during calls, mute-button flicker) were linked to power-management and driver interactions and were resolved after driver or vendor tool updates, power-profile adjustments, or USB-dongle replacement.

• SIP trunk signalling variability and reporting misclassifications were escalated to carriers or product teams and resolved by adjusting trunk settings or correcting subscription/forwarding targets.

• Agent-status change failures, voicemail recording failures, and contaminated internal contact lists were escalated to vendor support; some of those investigations had no recorded remediation and were closed without technical resolution.
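
Where access-token errors such as accesstokeninvalid (20101) recurred, reissuing a fresh, unexpired Voice token on the server side was the typical remedy. A minimal sketch of that step, assuming the Twilio Node helper library; the SIDs, TwiML App SID and agent identity below are placeholders, not values from these tickets:

```typescript
import twilio from 'twilio';

const AccessToken = twilio.jwt.AccessToken;
const VoiceGrant = AccessToken.VoiceGrant;

// Mint a short-lived Voice token for one agent identity.
const token = new AccessToken(
  process.env.TWILIO_ACCOUNT_SID!,   // AC... (placeholder)
  process.env.TWILIO_API_KEY_SID!,   // SK... (placeholder)
  process.env.TWILIO_API_KEY_SECRET!,
  { identity: 'agent@example.com', ttl: 3600 },
);
token.addGrant(
  new VoiceGrant({
    outgoingApplicationSid: 'AP00000000000000000000000000000000', // placeholder TwiML App
    incomingAllow: true,
  }),
);

// Hand this JWT to the browser softphone; stale or expired tokens surface as 20101.
console.log(token.toJwt());
```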

Source Tickets (241)
4. Provisioning corporate SIMs for WhatsApp Business and PC-based SMS access
95% confidence
Problem Pattern

Corporate users experienced failed or missing SIM/eSIM provisioning or phone-number assignment that prevented WhatsApp Business registration, SMS-based PC apps, or normal mobile service. Affected devices reported No Service, loss of reception, dropped calls after a brief beep, or eSIM activation failures with messages such as “QR code recognized - no usable data found”, expired/invalid/already-used activation codes, or QR code no longer valid. Root causes included inventory or contract-record mismatches, carrier refusals/porting failures, deactivated or reassigned numbers, and pending or incomplete carrier transfer paperwork (for example carriers requiring a completed contract-transfer form and the provider framework contract number/Rahmennummer).

Solution

Support verified Inventory360 and contract/line records and used the vendor-order Automation for Jira or direct distributor (Conbato) requests to order, reissue, replace, or reassign physical SIMs and eSIM profiles; order confirmations were sent to the contract owner and Mobilfunkbestellungen@iu.org. Where eSIMs were inactive or unusable, carriers issued new activation profiles/QR codes (sometimes emailed) and technicians either obtained replacement eSIMs from the distributor or restored service by removing and re-adding the eSIM profile on-device; both approaches resolved multiple incidents. When internal automation blocked processing, requesters were routed to create a New mobile device / mobile-order ticket and select 'Only mobile' so the distributor workflow could run.

Service loss was traced to incorrect Inventory360 or contract records (for example lines marked auf Lager (in stock) or missing from Inventory360); service was restored after correcting Inventory360/contract status and having Conbato reactivate or reassign the number. If a number had already been returned to the carrier and could not be recovered, a new number and SIM/eSIM were issued. On iOS devices that lacked the Add Cellular Plan option because a line had no active contract, the issue was resolved by re-establishing the contract or providing a physical SIM. Unused physical SIMs were decommissioned by requesting the distributor move the number to a blank SIM or release it, physically destroying the SIM card, and confirming release with the provider.

Non-technical constraints were recorded: IU policy and HR/legal reviews sometimes prevented transferring corporate mobile numbers to departing employees, numbers were routinely blocked/unavailable for a period after offboarding, and porting private numbers into an IU mobile contract was sometimes refused by the mobile provider. Carrier-specific transfer requirements caused additional delays; for example Telekom required a completed contract-transfer form including the provider framework contract number (Rahmennummer), and missing Rahmennummer or incomplete paperwork delayed or prevented number/contract transfers. When numbers changed, related external services (for example Twilio caller-ID) were updated or handed off to the responsible service owner for caller-ID verification and configuration. Additional observed resolution: some work phones already had an eSIM that could be moved between devices; confirming the eSIM existed on the company handset and transferring the eSIM profile restored the work line without issuing a new profile.

5. Temporary mobile data increase to 'unlimited' via Telekom self-service portal
90% confidence
Problem Pattern

Users requested increases to company mobile data allowances — temporary one‑off/top‑up, temporary 'unlimited' for the current billing month, or permanent monthly increases — frequently because of travel or running out of data near month end. Reported symptoms included depleted data balances, Magenta‑App or eSIM records not reflecting requested changes, and user concerns that repeated short‑term 'day pass' purchases might be cost‑inefficient compared with contract changes. Affected systems included Deutsche Telekom tariffs accessible via pass.telekom.de, the Magenta app/eSIM records, company devices on the Telekom LTE network, and carrier/contract provisioning channels. Some support tickets were reported to have been automatically closed by the support automation before a definitive resolution was recorded.

Solution

Temporary increases for the current billing month were completed via Telekom's self‑service portal (pass.telekom.de). These changes were performed from the company phone while on the Telekom LTE network so the portal detected the correct tariff; via the portal users booked additional data volume or applied a temporary 'unlimited' setting for the billing month. For accounting and provisioning, approvals/orders were sent to the external provider Conbato and to internal distribution addresses (Mobilfunkbestellungen@iu.org and cpg‑requests). Automation for Jira produced order emails that contained requester and contract‑holder details, service type, requested delivery date (when provided) and instructions to send confirmations to the contract holder. No delivery address was required for tariff‑only changes, and tickets were marked done after carrier confirmation of the order. Permanent increases to the standard monthly allowance were handled by IT on the user's behalf: IT requested any required manager approval, submitted the contract change to the carrier/contract team, and the carrier processed and activated the new package; users were CC'd on the carrier confirmation e‑mail. Cases sometimes presented as unchanged quotas in the Magenta‑App/eSIM until carrier provisioning completed; IT verified carrier records and informed the user after activation. As a short‑term workaround, purchasing day passes via the Magenta app was suggested; when travel was frequent and repeated day passes proved more expensive, users sometimes opted for a permanent contract increase instead. Occasionally tickets were automatically closed by Automation for Jira before a definitive resolution was recorded; in those instances staff reopened or recreated the necessary order/confirmation workflow and completed the carrier order before closing the ticket on carrier confirmation.

6. Cloudya desktop window positioned off-screen on multi-monitor setups
90% confidence
Problem Pattern

On Windows, the Cloudya desktop application sometimes failed to open, remained hidden/off-screen, or showed only a tiny taskbar preview thumbnail while the main window and/or dialer were missing or unresponsive. Users reported additional unresponsive dialogs or a missing call window after installation, credential changes, display reconfiguration, or intermittent internet connectivity. No specific error codes were reported; affected systems included the Cloudya desktop client and associated telephony/call-forwarding functionality.

Solution

Cloudya desktop windows that were positioned on a secondary or disconnected monitor were restored by moving them back to the primary display (for example via the taskbar preview 'Maximize' action) or by reconnecting the external monitor. Users who experienced a missing or unresponsive dialer regained calling functionality after signing out of the desktop client and signing back in; unresponsive dialogs sometimes persisted until the app view was refreshed, scrolled, fully exited and restarted, or the program was reinstalled. In incidents where the desktop client was missing or unresponsive after installation or a password reset, the client was reinstalled and credentials were reset. The Cloudya web application was used as a temporary fallback while Company Portal or connectivity state was resolved. As an alternative telephony mitigation when the desktop app would not open, support configured temporary call forwarding to another user/phone (for example to a colleague or mobile number) and later restored the user's preferred mobile number as the call target; the user subsequently confirmed Cloudya access was restored. Intermittent internet connectivity and display reconfiguration were noted as likely contributing factors in several cases.

7. Twilio EKG showing orange warning triangle and WhatsApp messages stuck loading
95% confidence
Problem Pattern

Intermittent Twilio/Flex/Conversations degradations producing EKG orange-triangle warnings, conversations or WhatsApp messages stuck loading or showing empty composers, channel-switch hangs, and TaskRouter actions repeatedly stuck (e.g., AcceptTask pending, StartOutboundCall blocked). Telephony symptoms included prolonged call-setup latency, failed or dropped calls, one-way audio, and Twilsock/Sync/CDS/Voice SDK errors (for example 45511, 31000/31005/31009, 20101/20104). Agents were sometimes forced offline with explicit 'Your session is expired' errors, and incidents also produced reservation/state mismatches, phantom/duplicated tasks, caller-ID or iFrame mismatches, and occasional Group Room 'Permission denied' despite successful TURN connectivity.

Solution

Incidents were triaged into Conversations/WhatsApp UI problems and telephony/TaskRouter anomalies; resolution actions spanned client, routing/database, network, and engineering teams. Key observed fixes and actions included:

• Client/browser remediation: disabling third-party browser plugins, hard reloading (Ctrl+Shift+R), clearing caches/cookies, and restarting clients recovered many web-client failures, cleared EKG warnings, and restored missing composers/iFrames. Full client console/network logs and HAR captures were collected to speed analysis.

• TaskRouter and message state repairs: removed duplicate/noisy WhatsApp records, phantom or repeatedly recreated PoB/WA tasks, silent participants, hung reservations, and duplicated task/reservation entries; repairing reservation and task state cleared stuck AcceptTask/StartOutboundCall states and infinite task loops.

• Routing, account, and mapping corrections: consolidated duplicate Twilio accounts and corrected number→site and caller-ID→CRM mappings; applied Twilio database fixes that restored WhatsApp outbound sends, inbound routing, and corrected caller-ID and iFrame associations.

• Channel/server engineering fixes: applied targeted patches to WhatsApp/Flex/server components to resolve blocked sends, channel-switch hangs, chats that could not be closed, message ingestion of empty bodies, and UI regressions; temporarily disabling problematic channels suppressed aggressive UI blinking when required.

• Session/authentication and Twilio service restorations: corrected tokens and worker state, cleared targeted caches/cookies, and escalated to Twilio to clear Twilsock/Sync/CDS and Voice SDK errors. Browser logs showing HTTP 400 during call setup or reservation wrap-up were captured to correlate client behavior with TaskRouter and service state changes.

• Group Room and TURN: diagnosed cases where TURN connectivity succeeded but Group Room returned 'Permission denied'; Twilio/engineering permission and configuration fixes on the service side cleared those failures.

• Network, bandwidth and time‑of‑day remediation: where call‑setup latency, one‑way audio, and elevated failure rates correlated with peak usage, local/ISP congestion and RTC degradation were identified. Restorations followed router/consumer network restarts, ISP remediation/upgrades, and targeted QoS changes, which reduced peak‑time latency and call drops.

• Audio and peripheral fixes: issues where the browser showed no microphone despite other apps working were traced to incorrect browser device selection, headset reconnections, or vendor driver/firmware problems (for example Dell Command/Peripheral Manager). Reconnecting devices, selecting the correct browser audio device, and applying firmware/driver updates restored audio in affected cases.

• FS escalation for forced-offline: specialist analysis confirmed a bug in the FS that forcibly set some agents offline with 'Your session is expired' messages; the issue was escalated to the specialist team and a service-side fix was pending at the time of the ticket. Affected users were tracked and notified when the fix was deployed.

Collecting TaskSIDs, reservation IDs, HARs and full client console/network logs materially accelerated root-cause identification. Recurring incidents were closed after combining reservation/task cleanups, routing/database corrections, engineering channel/SDK patches, client hard reloads/cache clears (and disabling offending plugins/channels as short-term workarounds), network/bandwidth remediation, and Twilio-side fixes to tokens, workers, permissions, or database state.
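
As a companion to collecting TaskSIDs and reservation IDs, a quick server-side inspection of a stuck task often shows whether a reservation is hung. A minimal sketch, assuming the Twilio Node helper library; the workspace and task SIDs are placeholders, not values from these tickets:

```typescript
import twilio from 'twilio';

const client = twilio(process.env.TWILIO_ACCOUNT_SID, process.env.TWILIO_AUTH_TOKEN);
const workspaceSid = process.env.TWILIO_WORKSPACE_SID!; // WS... (placeholder)

// Fetch a task and its reservations to see whether anything is stuck.
async function inspectTask(taskSid: string): Promise<void> {
  const task = await client.taskrouter.v1
    .workspaces(workspaceSid)
    .tasks(taskSid)
    .fetch();
  console.log('task', task.sid, 'status:', task.assignmentStatus, 'age(s):', task.age);

  const reservations = await client.taskrouter.v1
    .workspaces(workspaceSid)
    .tasks(taskSid)
    .reservations.list({ limit: 20 });
  for (const r of reservations) {
    // A reservation left in 'pending' long after AcceptTask points at hung state.
    console.log('reservation', r.sid, r.reservationStatus, 'worker:', r.workerName);
  }
}

inspectTask('WT00000000000000000000000000000000'); // placeholder TaskSid
```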

8. iPhone not receiving incoming calls due to unconditional call forwarding
95% confidence
Problem Pattern

Incoming calls to mobile handsets (single‑ and dual‑SIM, including iPhones with eSIM) failed to ring or appear on the device while outgoing calls and mobile data worked; callers were immediately diverted to voicemail, an automatic attendant/organisation greeting, or other numbers, or calls were not delivered. Symptoms occurred immediately or after a few rings and could persist across device resets. Affected components included carrier forwarding/diversion settings, handset or SIM/line configurations, and conflicting physical SIM/eSIM combinations after provider changes.

Solution

Incoming call delivery was restored after clearing unconditional call forwarding/diversion entries or removing conflicting SIMs. In incidents where carrier-level forwarding persisted across device resets, carrier unregister/GSM MMI codes were applied (examples recorded: ##21# to cancel unconditional forwarding; ##002# to cancel all forwarding/mailbox). In other cases forwarding or automatic answer had been set on a handset or on the originating line (including forwarding to a colleague or to an automatic attendant) and was removed on that device or line. On dual-SIM phones the unregister/clear action and any forwarding checks were applied per affected SIM/line. In one instance calls were blocked because an old physical Vodafone SIM remained in the device after switching to a Telekom eSIM; removing and disposing of the old physical SIM restored incoming calls. Support also used an internal test number (3311) to verify call routing from the affected mobile while troubleshooting. Conditional forwarding entries were preserved where they were intentionally required for later reconfiguration.

9. Provisioning Cloudya accounts and assigning geographic DIDs for external users
92% confidence
Problem Pattern

Requests to provision NFON/Cloudya accounts, assign internal extensions or geographic DIDs (including publishable virtual numbers for business cards) and repurpose or decommission legacy corporate landlines were submitted during onboarding, procurement, printing, or office changes. Reported symptoms included lack of Cloudya access, missing or unknown telephone numbers, inability to select the expected outbound caller ID in Contact Pad/Phone Pad, requests for virtual DIDs that integrate with MS Teams/Zoom, and requirements to forward calls to private numbers only during working hours. Tickets also reported uncertainty whether legacy landlines remained operational or publishable externally and scheduled inbound-number migrations to other platforms (for example Twilio). Affected systems included NFON/Cloudya, Contact Pad/Phone Pad, Twilio, Intracall/NCTI, MS Teams, Zoom and Workday contact records.

Solution

Technicians provisioned NFON/Cloudya accounts and assigned internal extensions and geographic or virtual DIDs consistent with each organisation’s numbering plan after confirming site or requested area code. When local DIDs were unavailable they ordered geographic number blocks from NFON, submitted required NFON documentation, and applied provisioned numbers to Cloudya accounts. For requests to publish a number on business cards or in the IU Store form, technicians provided a dedicated corporate/virtual DID and updated the corresponding Workday contact and printing records so private numbers were not used on Visitenkarten. Where MS Teams or Zoom presentation was required technicians verified the actual telephony platform in use and either provisioned the DID on that platform or coordinated with platform owners to enable external-number presentation; unnecessary Cloudya provisioning was removed to avoid duplicate accounts and billing.

Usernames were set to users’ email addresses when appropriate and shared or generic/team usernames were created on request; cost-center or charge codes were applied when required. Account access was enabled via the standard activation workflow and password set/reset emails; after activation assigned numbers were visible in the Cloudya web app and PINs/numbers were delivered via IU Safe secure email. The Cloudya/Claudya client package was assigned to the Company Portal for softphone installs and eSIM/SIM provisioning or handset procurement was coordinated for mobile use. Outbound caller-ID and Contact Pad/Phone Pad anomalies were resolved by verifying users’ configured outbound numbers, assigning the correct outbound number, and removing outdated entries so only appropriate caller-ID options remained. Departmental or role numbers were configured as call queues with routing to primary users and overflow to departmental or backup numbers after the configured timeout. For users requiring forwarding to private numbers only during working hours technicians configured time-based routing/forwarding where supported and clarified voicemail behavior: voicemail was recorded on the provisioned corporate/Cloudya service when voicemail was configured as the post-forward fallback, and routing was adjusted so unanswered forwarded calls hit Cloudya voicemail (or the selected failover) after the configured ring/timeouts.

For legacy corporate landline enquiries technicians confirmed whether the legacy number remained operational and suitable for external publication, then repurposed the legacy number or provisioned a new corporate DID as requested. For decommissioning of legacy telephony services (for example Intracall and associated NCTI servers) technicians maintained service availability until scheduled inbound-number migrations to the replacement platform were completed, applied required security updates and transferred address ranges after the migration, and confirmed there were no active users before shutting down VMs and decommissioning the service.

Before provisioning technicians checked for existing Cloudya accounts/extensions to avoid duplicates and requested a reference user when required. For physical office telephony requests they clarified whether desk phones, mobile handsets, or a mixed solution was required, confirmed site/room and quantity for mobile handsets, coordinated procurement and SIM/eSIM activation, and enabled voicemail/answering-machine services in Cloudya or configured external answering-machine requirements as requested. When printer or other office device records were missing they liaised with facilities/asset management to locate or register devices and coordinated platform-specific setups. All assignments and status updates were recorded and communicated to requestors through the appropriate channels (for example Microsoft Teams messages, IU Safe secure email).

10. No audio on Cloudya desktop/web calls while local audio devices test OK
62% confidence
Problem Pattern

Cloudya desktop and web calls produced audio failures (no incoming or no outgoing audio) or severe degradation (static, distortion, lag/delay) while the same microphones/headsets worked in other applications. Symptoms were intermittent or frequent and sometimes only occurred on specific networks (office vs home); built-in audio tests occasionally showed output without input and external headsets were sometimes not enumerated. Call drops after ~30 seconds, high latency/jitter/packet loss, and unintelligible audio were reported on affected systems.

Solution

Investigations identified two primary root-cause families: network-level media-path problems and client-side device/permission/firmware issues. Network cases included RTP/UDP media flows being modified, blocked, or degraded by on-prem routers, NAT features or ISP/office network conditions (examples: SIP-ALG interference, NAT handling, or elevated latency/jitter/packet loss). These incidents were diagnosed from call logs and media-level traces and by reproducing calls over a cellular hotspot or an alternate network; calls recovered when the router/NAT features interfering with RTP were removed or when the media path or network segment was corrected, and other incidents improved after users moved to a different network (home/hotspot) that did not exhibit packet loss or high jitter. Client-side cases involved incorrect device selection or enumeration, OS/browser microphone permission blocks, driver or connection faults, and headset firmware defects; those were resolved after the telephony device became exposed to the OS/browser (re-pairing/reconnecting, reinstalling drivers), the correct Cloudya audio device was selected and the Cloudya client or browser was restarted so the device became selectable. Vendor utilities and firmware/driver updates (for example Jabra Direct/System Update) resolved several instances of severe static, interference, or noise. Troubleshooting relied on Cloudya’s built-in audio-test behavior, device-selection checks, diagnostic logs, hotspot/alternate-network tests and call-recordings or network traces to distinguish network-path degradation (latency/jitter/packet loss or RTP modification) from local device or firmware/driver failures. Swapping a workstation without improvement was treated as an indicator of a network- or path-level problem rather than a local endpoint fault.
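
When separating media-path problems from local device faults, a rough latency/jitter/loss probe run from the affected workstation can support the hotspot/alternate-network comparison described above. The sketch below measures TCP connect times as a crude proxy (real RTP/UDP behaviour can differ); the target host and port are placeholders, not documented Cloudya endpoints:

```typescript
import net from 'node:net';

// One timed TCP connect; resolves with the connect latency in ms.
function connectOnce(host: string, port: number, timeoutMs = 3000): Promise<number> {
  return new Promise((resolve, reject) => {
    const start = Date.now();
    const socket = net.connect({ host, port });
    socket.setTimeout(timeoutMs);
    socket.once('connect', () => { socket.destroy(); resolve(Date.now() - start); });
    socket.once('timeout', () => { socket.destroy(); reject(new Error('timeout')); });
    socket.once('error', (err) => { socket.destroy(); reject(err); });
  });
}

// Repeat the probe and report mean latency, jitter (mean delta), and loss.
async function probe(host: string, port: number, samples = 20): Promise<void> {
  const rtts: number[] = [];
  let failures = 0;
  for (let i = 0; i < samples; i++) {
    try { rtts.push(await connectOnce(host, port)); } catch { failures++; }
    await new Promise((r) => setTimeout(r, 500));
  }
  const mean = rtts.reduce((a, b) => a + b, 0) / Math.max(rtts.length, 1);
  const jitter = rtts.length > 1
    ? rtts.slice(1).reduce((a, r, i) => a + Math.abs(r - rtts[i]), 0) / (rtts.length - 1)
    : 0;
  console.log({ host, meanMs: Math.round(mean), jitterMs: Math.round(jitter), lossPct: (failures / samples) * 100 });
}

probe('example.com', 443); // placeholder target; compare office vs. hotspot runs
```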

11. Unexpected outbound calls from Twilio triggered by scheduled tasks on CRM Opportunities
91% confidence
Problem Pattern

Contacts received duplicate or unexpected outbound calls from the organization’s Twilio number, including cases where contacts had explicit no-call/EEB flags or had no follow-up (FUP) created but still remained on POB/telephony call lists. Scheduled follow-up tasks tied to CRM Opportunities or inbound-activity records were sometimes executed early or retriggered after prior attempts or after inbound/DialPad calls; some scheduled call tasks were invisible or could not be removed due to permissions regressions. Calls were also observed being routed into Voice Outbound or POB call stacks unexpectedly, producing parallel or repeated outreach. Affected systems included Twilio, Salesforce Opportunity and inbound-activity Schedules, POB/Push, and DialPad.

Solution

Incidents were mitigated by removing active Twilio call tasks and stopping the dialer/dialing workflows that caused re-presentation. Specific actions that stopped customer-facing calls included deleting queued Twilio Tasks from the Twilio Preview interface; removing unnecessary Planned FUP entries from Opportunity Schedules and inbound-activity Schedules; closing duplicate CRM/MS accounts that generated extra attempts and confirming their Twilio tasks were closed; and temporarily marking an affected Opportunity as “Lost” so the CRM deleted the currently active Twilio task before reactivation. For locations where the dialer retriggered previously attempted Opportunities after Twilio showed no active calls, staff temporarily stopped or disabled the dialer for the affected location. Reporters were also advised to reapply or set opt-in/DOI status on the MS Account and to confirm removal from POB/calling lists when an EEB/no-call was recorded; in at least one reported case an EEB/no-call existed but the contact remained on the POB list and the ticket was closed without documented remediation. Support requested concrete TaskSid examples from reporters to investigate task-level routing and scheduling. During triage development diagnosed scheduling/dialing handling errors: scheduled Twilio calls were not being cancelled during or after inbound/DialPad calls, the POB/Push flow could trigger repeated or parallel dialing attempts, scheduled FUPs could execute earlier than their scheduled date, and permission regressions prevented users from removing unexpected tasks. A permanent code fix for the scheduling and dialing logic was under development at the time mitigations were used.
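
The "delete queued Twilio Tasks" mitigation can also be performed programmatically by cancelling the task so the dialer no longer presents it. A minimal sketch, assuming the Twilio Node helper library; the SIDs and reason text are placeholders, not values from these tickets:

```typescript
import twilio from 'twilio';

const client = twilio(process.env.TWILIO_ACCOUNT_SID, process.env.TWILIO_AUTH_TOKEN);

// Cancel a queued outbound task so the dialer stops presenting it.
async function cancelTask(workspaceSid: string, taskSid: string): Promise<void> {
  const task = await client.taskrouter.v1
    .workspaces(workspaceSid)
    .tasks(taskSid)
    .update({ assignmentStatus: 'canceled', reason: 'EEB/no-call flag on contact' });
  console.log(task.sid, 'is now', task.assignmentStatus);
}

cancelTask(
  'WS00000000000000000000000000000000', // placeholder Workspace SID
  'WT00000000000000000000000000000000', // placeholder TaskSid
);
```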

12. Twilio login/console failing with JavaScript TypeError on Worker records
90% confidence
Problem Pattern

Twilio web UI or console sometimes failed to initialize or render interactive elements, producing a JavaScript TypeError (Cannot read properties of null (reading 'addEventListener')) and leaving the console unresponsive. Twilio Flex occasionally did not display Salesforce case/contact context for calls even when the Salesforce record showed the correct link, and some call flows (including outgoing calls) failed tied to specific Task SIDs. Browser-based Salesforce sign-in/sync flows could stall during SSO/session interactions (commonly in Chrome with an existing Salesforce session), and admin preview/sync sometimes failed when users launched Flex from legacy intranet links.

Solution

Incidents were traced to a small set of account-level causes and one problematic intranet link. Multiple cases where Twilio initialization broke were resolved after missing Worker metadata (notably team/location assignment and skill attributes) was restored; copying the missing attributes from a colleague’s Worker at the same site reinstated the required metadata and removed the JavaScript initialization error and UI failures. Several stalled login/salesforce sync flows were caused by account configuration interacting with browser SSO/session state (commonly Chrome with an existing Salesforce session); a specialist updated account settings during short remote sessions which restored normal sign-in and Salesforce sync. A distinct class of admin preview/sync failures was caused by a legacy intranet “Chihuahua” link: replacing the legacy link with the standard flex.twilio.com workflow and applying the previously used pinned workaround restored preview mode and Salesforce sync for affected admin accounts. Support also restored or enabled individual Twilio access where accounts or authentication were disabled and re-added users’ work numbers to Twilio’s Select Caller ID when those numbers were missing. Cookie-blocker extensions were investigated and were not found to be the root cause. Separately, at least one report where Twilio did not display the linked Salesforce account on the Case (Salesforce showed the correct link) and outgoing calling failed was escalated to the Twilio specialist team; Task SIDs were collected and the vendor was engaged for direct investigation, but no technical fix was recorded in the ticket before it was closed.
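
The reported TypeError is the generic browser failure mode when code calls addEventListener on an element that never rendered (here because missing Worker metadata broke earlier initialization). A small illustrative guard, not taken from the Twilio codebase, shows how such a null reference surfaces and can be made visible instead of fatal:

```typescript
// Attach a click handler only when the element actually rendered.
function bindClick(elementId: string, handler: (ev: MouseEvent) => void): void {
  const el = document.getElementById(elementId);
  if (el === null) {
    // Without this guard the browser throws:
    // "Cannot read properties of null (reading 'addEventListener')"
    console.warn(`bindClick: element #${elementId} not found; listener skipped`);
    return;
  }
  el.addEventListener('click', handler);
}

bindClick('dialer-call-button', () => console.log('call button clicked')); // placeholder element id
```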

13. Avaya Communicator prompted for configuration and prevented agent registration
85% confidence
Problem Pattern

Avaya Communicator on Windows prevented normal agent telephony by prompting for configuration or authentication at launch or by reporting network failures. Observed symptoms included an un-confirmable configuration dialog requesting domain/connection fields, a password/authentication prompt that timed out, or the phone/service message 'Network Currently unavailable'. Avaya Agent Desktop showed agents as 'Not Ready' or appeared available but did not receive inbound calls; issues occurred both on-premises and when remote (VPN).

Solution

Support interventions restored Avaya Communicator sign-in and Agent Desktop telephony in the observed incidents. In multiple cases a remote session reopened Communicator, domain/connection fields were re-entered and saved, the application was restarted, and Agent Desktop subsequently registered and returned the agent to Ready. In other incidents technicians re-established Communicator authentication after a password/login timeout, after which Agent Desktop again received inbound calls. One ticket recorded the phone/service displaying 'Network Currently unavailable' and technicians advised reconnecting VPN, checking network connectivity and for credential sign-in prompts, but no final resolution was documented for that incident. Several tickets recorded only that technician assistance restored Communicator login and telephony without step-by-step details.

14. Teams blocked international outbound calls due to missing PSTN/Calling-Plan license
78% confidence
Problem Pattern

Users were unable to make or receive PSTN calls via Microsoft Teams: international outbound calls failed immediately with an error or were blocked, and some users reported no Teams-based telephony access for phone dial‑ins. Affected systems included Microsoft Teams using Microsoft Calling Plans and Teams integrated with third‑party PSTN trunks (Cloudya/NFON, Vonage).

Solution

International outbound call failures were resolved by provisioning a PSTN calling capability for the affected Teams accounts. In reported cases the Microsoft Calling Plan international add‑on (or equivalent PSTN license) had been assigned to the user's Teams voice subscription, which enabled international dialing. In environments using third‑party trunks the same symptom was resolved after the carrier/trunk (Cloudya/NFON, Vonage) was configured and authorized for international outbound calls. Where Teams‑based PSTN calling was not available in the tenant, the affected users were migrated to Cloudya as an alternative telephony service; provisioning Cloudya required creating a reference user account to enable telephone services on the Cloudya side.

Source Tickets (2)
15. Outbound Cloudya/NFON numbers flagged as 'Fraud/Spam' on recipient Android devices
66% confidence
Problem Pattern

Outbound calls from company DIDs (Cloudya/NFON, Vonage/MM and similar VoIP numbers) were shown on recipients' phones with labels such as 'Fraud/Spam', 'Spam' or 'Suspicious', most frequently on Android devices using Samsung Smart Call. Multiple callers sharing the same DID saw the issue reported by recipients, and the label appeared independently of caller-side settings or CRM. Affected customers often did not answer because of the displayed spam/suspicious label.

Solution

Investigations first collected user reports and screenshots and checked the scope of affected recipients and networks to distinguish carrier/provider reputation issues from device-level filtering. Two distinct outcomes were observed. 1) For carrier/recipient‑side reputation flags, providers (for example NFON) investigated number reputation, re‑provisioned or replaced the DID, and completed caller‑ID/business verification with carriers and Google where applicable; after re‑provisioning and verification the carriers/Google removed the spam flag and the label stopped appearing. 2) For device‑level filtering (notably Samsung Smart Call), analysis showed the smartphone feature marked the number as spam/suspicious; this behavior was not controllable from our systems or the provider and was communicated to requesters as a device‑side limitation. The resolution path therefore depended on the root cause determined during triage (carrier/provider reputation vs. recipient device filter).

Source Tickets (2)
16. Inbound callflow misrouting caused by outdated service/opening hours
90% confidence
Problem Pattern

Inbound phone numbers and skill routing were treated as if service was available outside intended hours because opening-hour or holiday schedule data on telephony platforms (Avaya, Vonage / Interaction Architect) was missing, outdated, or not applied. Callers reached regular queues or heard the standard IVR instead of a holiday/closed announcement during weekends, holidays, or other closed periods, typically with no error messages. Affected components included inbound callflows, holiday tables, and Interaction Architect skill schedules.

Solution

Callflow and holiday schedule records were corrected on the affected telephony platforms so routing matched declared service availability. Specific fixes included:

• EM Sales inbound number (493031198720): opening hours were set to Monday–Friday 07:00–19:00, weekend shifts were removed, and the changes were saved and applied so callers were no longer routed during unintended times.

• Schools phoneline (Avaya): a Holiday table was created, a closure was scheduled for 20 Dec 2024 16:00 to 2 Jan 2025 08:30, and the Christmas greeting was applied so callers heard the closure announcement.

• FS Studsek (Vonage): a holiday entry was added in Vonage (assistance logged 2024-07-19) so the closed/holiday announcement and routing took effect.

• IU Akademie (Vonage Interaction Architect): the Interaction Architect availability/schedule for the “SO Upskilling *82” skill was updated to Monday–Thursday 09:00–17:00 and Friday 09:00–16:00 (change applied on 24/25), and the ticket was closed.

Several tickets contained no step-by-step implementation details; the recorded changes were limited to the updated schedules and applied holiday entries on the respective platforms.
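
The routing behaviour behind these fixes reduces to an opening-hours/holiday check that decides between the queue and the closed announcement. A simplified illustration of that logic follows; it is not the Avaya/Vonage configuration format, and the windows simply mirror the EM Sales and Schools examples above:

```typescript
// Day: 0 = Sunday ... 6 = Saturday; times are local "HH:MM".
type DayWindow = { day: number; open: string; close: string };

// Mon-Fri 07:00-19:00, mirroring the EM Sales schedule above.
const openingHours: DayWindow[] = [1, 2, 3, 4, 5].map((day) => ({ day, open: '07:00', close: '19:00' }));

// Closure periods, e.g. the Christmas closure from the Schools phoneline fix.
const holidays = [{ from: '2024-12-20T16:00', to: '2025-01-02T08:30' }];

function isServiceOpen(now: Date): boolean {
  const onHoliday = holidays.some((h) => now >= new Date(h.from) && now <= new Date(h.to));
  if (onHoliday) return false;
  const hhmm = now.toTimeString().slice(0, 5); // "HH:MM", zero-padded
  return openingHours.some((w) => w.day === now.getDay() && hhmm >= w.open && hhmm < w.close);
}

// Route to the regular queue only when open; otherwise play the closed announcement.
console.log(isServiceOpen(new Date()) ? 'route-to-queue' : 'play-closed-announcement');
```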

17. Avaya reporting does not retain or expose per-direct-number call-volume data
90% confidence
Problem Pattern

Request for call-volume statistics for specific direct phone numbers (extensions 7109 and 7119) from September 2024 to present; Avaya/telephony reporting showed call-centre metrics but no equivalent data for individual direct lines. No error messages occurred — symptom was missing or absent call statistics for the direct numbers in the reporting system.

Solution

The request was escalated to the specialist telephony/reporting team, who confirmed the reporting system only retained and exposed aggregated call-centre statistics and did not store or surface per-direct-line (individual DDI/extension) call-volume data. Consequently, historical call-volume reports for extensions 7109 and 7119 from the requested period could not be produced, and the requester was informed that the data was unavailable.

Source Tickets (1)
18. Twilio blocking outbound WhatsApp when WAPP consent record appears stale
91% confidence
Problem Pattern

Outbound WhatsApp messages via Twilio were intermittently blocked or absent from conversation history, frequently after channel switches or when a conversation exceeded WhatsApp’s 24‑hour session window. Symptoms included inability to send personalized messages with no explicit Twilio error codes, a persistent Twilio red banner such as “Direct message is unavailable” or “free-text disabled,” automatic generic post-switch messages that blocked subsequent personalized sends until a reply, and tasks created without an active chat or call (chatChannelSid = null, callSid = null) which prevented outbound sends. Incidents also coincided with stale consent records in WAPP/Salesforce and occasional WAPP platform outages.

Solution

Multiple distinct root causes produced the same observable failures; incidents were resolved as follows.

• Stale or out-of-sync consent records: Specialists removed and recreated the applicant’s consent entry in the active consent store (WAPP or Salesforce). After a short propagation delay Twilio accepted previously blocked outbound and inbound messages and conversation history populated.

• Channel-switch tasks created without an active chat or call: Investigations showed some channel-switches (commonly after a call result of “not reached”) produced Tasks that were accepted but lacked an active chat or call reference (chatChannelSid = null, callSid = null). Once an active chat channel was established (for example by re-creating the chat channel or re-triggering the channel-switch flow) outbound WhatsApp sends were accepted and conversation history populated.

• Template configuration or added free-text during channel-switch: Sends that failed when a template was used with added free-text succeeded after teams corrected template usage/management and retested sends; subsequent sends were accepted when only approved template content was used where required.

• Meta/WhatsApp channel-switch policy: Incidents where an automatic generic WhatsApp message was sent after a channel change blocked personalized outbound messages until the recipient replied to that automatic message; those incidents cleared once the automatic post-switch message and the recipient reply window completed.

• WhatsApp 24‑hour conversation/session rule: Messages attempted outside the 24‑hour window required WhatsApp-approved templates. Affected teams sent approved templates (for example, previously observed cases used approved notification templates) and subsequent sends were accepted.

• Operator reply UI behaviour: Replies entered outside the notification-reply control or not following the expected template/reply flow failed; once replies were entered via the notification-reply flow and conformed to the expected format replies delivered.

• WAPP platform incidents/outages: Some widespread send failures were traced to WAPP service incidents. Specialists performed WAPP incident mitigation/cleanup; after mitigation Twilio sends recovered. Additional Twilio SIDs were collected when further occurrences were observed.

• Twilio UI anomalies: Twilio logs often did not present explicit error codes for these conditions, and the UI sometimes displayed a persistent red banner (e.g. “Direct message is unavailable” or “free-text disabled”) even when test sends succeeded. Support teams performed test sends to validate delivery and engaged the product team to clarify UI behaviour.

Across cases, resolution actions matched the root cause: consent-store reconciliation, establishing an active chat channel for the Task, template-management corrections, completing the automatic post-switch/reply flow, using approved templates for 24‑hour-window sends, WAPP service mitigation, or correcting operator reply controls. After the applicable remediation, Twilio accepted previously blocked outbound and inbound WhatsApp messages and conversation histories populated.
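
For the 24-hour-window cases, sends succeed only with a pre-approved template. A minimal sketch of such a send, assuming the Twilio Node helper library and a Content API template SID; the sender number, template SID, and variables are placeholders rather than the templates referenced in these tickets:

```typescript
import twilio from 'twilio';

const client = twilio(process.env.TWILIO_ACCOUNT_SID, process.env.TWILIO_AUTH_TOKEN);

// Send an approved template; free-text bodies are rejected outside the window.
async function sendTemplate(to: string): Promise<void> {
  const message = await client.messages.create({
    from: 'whatsapp:+4915000000000',                  // placeholder WhatsApp-enabled sender
    to: `whatsapp:${to}`,
    contentSid: 'HX00000000000000000000000000000000', // placeholder approved template
    contentVariables: JSON.stringify({ '1': 'Max Mustermann' }),
  });
  console.log(message.sid, message.status);
}

sendTemplate('+491700000000'); // placeholder recipient
```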

19. Inbound call qualification failure caused by recent code regression
90% confidence
Problem Pattern

After answering inbound calls, agents were unable to complete the 'qualify' action in the call workflow; no explicit error codes were reported. The symptom was a failure to save/complete call qualification immediately following inbound call handling, impacting the call-qualification feature and inbound call workflows.

Solution

The issue was escalated to the specialist team, who identified a small regression introduced in a recent change. Developers implemented a code fix for the bug and deployed it to production. After deployment the call-qualification functionality was restored and agents were able to qualify calls following inbound calls.

Source Tickets (1)
20. One-off PBX call forwarding activation for a specific DID
95% confidence
Problem Pattern

Inbound calls to specific DIDs or internal extensions on the corporate PBX (Cloudya) failed to reach the expected device or were routed to the wrong region/queue; callers sometimes heard messages such as "Person at extension <ext> is not reachable" and callbacks failed. Tickets also reported requests to forward numbers that had no Cloudya extension (for example personal mobile numbers or provider-managed numbers not yet enabled for inbound calling). Reports commonly occurred during planned absences, offboarding, or when sites were temporarily unstaffed and staff requested region-wide routing to a shared region phone.

Solution

Support first verified whether the affected number or user had an extension assigned in the corporate PBX (Cloudya). For DIDs/extensions managed in Cloudya, support configured immediate call forwarding or rerouted the affected DID(s)/extension(s), obtained any required approvals (for example via an Application Request), activated the new routing, and verified behavior by placing test inbound calls. Region-level requests were handled by implementing Cloudya routing for the shared region phone (for example a Region Ost setup covering multiple sites) so staff could access region phones when a site was unstaffed; these changes were approved as required, activated, and tested. Where appropriate, numbers were re-routed to the correct customer-service region or queue or escalated to the specialist team. For provider-managed numbers that were not yet enabled for inbound calling (for example Twilio-managed landlines awaiting DS inbound activation), support applied Cloudya forwarding rules to route the provider number temporarily to the user’s mobile. For requests referencing numbers not managed by the PBX (for example personal mobile numbers with no Cloudya extension), support confirmed no extension existed, informed the requester that the PBX could not forward an external mobile number, and closed the ticket without applying changes. Support also confirmed whether users preferred self-service via the Cloudya portal or support-assisted changes and coordinated related offboarding decisions (email/autoreply retention, account deletion, hardware return).

21. Twilio outbound call failure 45301 caused by incorrect or misformatted destination number
90% confidence
Problem Pattern

Twilio outbound voice calls and PowerOutBound (POB) tasks returned error messages such as "Unable to connect your call. Please try again or contact support. [45301]" (one report included 4503); calls dropped immediately, hung up after the first ring, or failed to complete, and POB tasks stopped arriving. Failures affected Twilio and attached contact-dialer/dialer platforms. Observed triggers included invalid or misformatted destination phone numbers (including formats that caused Twilio Lookup Service failures) and contact-dialer account misconfigurations such as admin/agent accounts with no assigned skills that prevented routing or PSTN termination.

Solution

Two classes of Twilio outbound failures that returned 45301 (and one 4503) were identified and resolved. In incidents caused by invalid or misformatted destination numbers — including number formats that caused Twilio Lookup Service failures — destination numbers were normalized to E.164 (correct country codes and removal of extraneous characters) and an internal bugfix addressing the Lookup behavior was deployed; subsequent outbound calls connected and 45301 failures did not recur. In separate incidents where calls were immediately dropped or PowerOutBound tasks stopped arriving despite basic network checks, failures were traced to contact-dialer account misconfigurations (admin/agent accounts with no required skills assigned); assigning the required skills restored routing and eliminated the 45301 failures.
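
The normalization step described above can be illustrated with a short sketch. The following TypeScript is a minimal example of coercing destination numbers into E.164 before dialing; the default country code and function name are assumptions for illustration, not the code that was actually deployed.

```typescript
// Minimal sketch of E.164 normalization before handing a number to the dialer.
// normalizeToE164 and DEFAULT_COUNTRY_CODE are illustrative names only.
const DEFAULT_COUNTRY_CODE = "+49"; // assumption: treat national numbers as German

function normalizeToE164(raw: string): string | null {
  // Keep only digits and a possible leading '+'.
  let cleaned = raw.trim().replace(/[^\d+]/g, "");

  if (cleaned.startsWith("00")) {
    // International prefix written as 00 -> '+'.
    cleaned = "+" + cleaned.slice(2);
  } else if (cleaned.startsWith("0")) {
    // National format -> prepend the assumed default country code.
    cleaned = DEFAULT_COUNTRY_CODE + cleaned.slice(1);
  } else if (!cleaned.startsWith("+")) {
    cleaned = "+" + cleaned;
  }

  // E.164: '+', then up to 15 digits, first digit non-zero.
  return /^\+[1-9]\d{1,14}$/.test(cleaned) ? cleaned : null;
}

// Example: "(030) 123-4567" -> "+49301234567"; an unparseable input returns null,
// letting the caller surface a validation error instead of a 45301 at dial time.
```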

22. Twilio Caller ID registration blocked when service mobile cannot receive verification calls
93% confidence
Problem Pattern

Outbound calls originating from Twilio and integrated systems failed to present the expected caller ID or to connect. Symptoms included default or random numbers displayed, a greyed‑out or absent caller‑ID selection, caller‑ID switching mid‑call, failed transfers or immediate closure of click‑to‑dial UI, and occasional error “Unable to connect your call (45391)”. Some incidents produced no TaskSID for initial troubleshooting.

Solution

Support obtained the service or corporate number in international format and registered it as a verified Caller ID in Twilio/Twilio Flex; this step resolved many cases where outbound calls showed Twilio defaults, other users' numbers, or random numbers. When automated Twilio verification callbacks failed because the service handset, Cloudya/NFON provisioning, or Twilio admin permissions were unavailable, verification was paused until provisioning/permissions were restored; where callbacks were impossible, verification was completed via a short joint Teams/PSTN verification call or by entering the Twilio verification code on the user's mobile.

Manual re-entry, deletion/recreation, or direct population of the Caller ID field in Twilio cleared greyed-out UI behaviour and resolved cases where Dialpad/Click-to-Dial closed immediately or would not initiate calls. Accounts that contained an obsolete or unverified number produced intermittent outbound failures and the Twilio error that the outbound number was not a verified caller ID; these were resolved by removing the outdated number and assigning the correct verified number. Twilio account and power_outbound settings were reconfigured when necessary; in cases of random caller-ID switching a Twilio engineer applied account-side changes so only the user's own Caller ID appeared.

For Virtual Campus/B2C endpoints, personal mobile numbers were not assigned and users were directed to use the provided site/VC ID so calls and transfers routed and were accepted correctly; specialists likewise advised selecting the appropriate location ID for B2C accounts. For Dialpad presentations, support enabled the verified number in Twilio, set the appropriate Caller ID in Dialpad, and updated site/contact numbers when they were outdated. Cloudya/NFON access and provisioning (passwords and provisioning tests) were restored before completing Twilio verification; where an internal administrator lacked Twilio admin permissions, verification and configuration were completed using, or by granting, an account with Twilio admin access.

Several tickets were closed without completion when required approver information or phone details were not provided; those requests were completed only after the approver (team lead or cost-center owner) and phone details were supplied. In at least one instance a Twilio login/authentication problem was resolved by re-authentication before Caller ID configuration could be completed. Additional diagnostics captured after these tickets showed that some outbound failures returning error 45391 correlated with transient network instability during call initiation; those incidents were investigated with network tests (including wired LAN tests) and by collecting TaskSIDs, and screenshots when TaskSIDs were not initially present, to enable deeper Twilio-side debugging.
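
For the registration step itself, Twilio's public REST API models verified caller IDs as validation requests; the sketch below, using the Twilio Node helper library in TypeScript, shows roughly what that step looks like. Credentials and the phone number are placeholders and the friendly name is illustrative.

```typescript
// Sketch: registering a service number as a verified outgoing Caller ID via the
// Twilio Node helper library. Credentials and number below are placeholders.
import twilio from "twilio";

const client = twilio(process.env.TWILIO_ACCOUNT_SID, process.env.TWILIO_AUTH_TOKEN);

async function registerCallerId(e164Number: string): Promise<void> {
  // Twilio places an automated verification call to the number; the validation
  // code returned here must be entered on the handset keypad to complete it.
  const validationRequest = await client.validationRequests.create({
    friendlyName: "Service mobile caller ID", // illustrative label
    phoneNumber: e164Number,
  });
  console.log(`Validation code for ${e164Number}: ${validationRequest.validationCode}`);
}

registerCallerId("+491701234567").catch(console.error);
```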

23. Outbound calls not processed when Power_Outbound enabled before its scheduled start
87% confidence
Problem Pattern

When the Power_Outbound/PowerOutbound service was enabled before its configured scheduled start, outbound calls and scheduled callbacks were not processed or were processed incorrectly. Twilio showed calls stuck in Available or Reserved Calls counts that did not update; manual adjustments to reserved call counts had no effect. User-visible symptoms included no outbound calls being placed, scheduled Call Appointments (CBAs) not triggering or being deprioritized, call-pull failures from the pipeline, brief rings with subsequent audio loss, and Dialpad call‑preview UI freezing. Affected systems included Power_Outbound, Twilio, Dialpad/shared-number configurations, and Salesforce callback records.

Solution

Investigations identified that Power_Outbound had a configured scheduled start and would not properly process outbound calls or scheduled callbacks when it was enabled before that start time. In multiple incidents outbound calling and callback processing were restored either after the service reached its configured start time or when support re-enabled/fixed the Power_Outbound/PowerOutbound service. Observed Twilio symptoms included calls remaining in the Available queue and not being pulled, the Reserved Calls UI field failing to increase or reflect processed calls, and manual adjustments to reserved call counts having no effect. Recorded remediation actions that coincided with resolution included re-enabling Power_Outbound and allowing the service to enter its scheduled window; one support attempt also restarted the Twilio integration and cleared the browser cache. A Dialpad-specific incident with ~2 rings then audio loss and a frozen call-preview UI was resolved by re-enabling Power_Outbound; support additionally arranged assignment of a personal phone number to reduce reliance on a shared number. Several related tickets were closed as resolved without documented technical steps.
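
In effect the documented behavior is a time-window guard: enabling the service is not sufficient on its own; the current time must also fall inside the scheduled window. A minimal, purely illustrative sketch of that condition follows; the field and function names are hypothetical and not taken from the Power_Outbound configuration.

```typescript
// Hypothetical guard illustrating the scheduled-start condition described above.
interface OutboundSchedule {
  enabled: boolean;
  scheduledStart: Date; // configured start of the dialing window
  scheduledEnd: Date;   // configured end of the dialing window
}

function mayProcessOutbound(schedule: OutboundSchedule, now: Date = new Date()): boolean {
  // Enabling the service alone is not enough; outside the scheduled window,
  // calls remain in Available/Reserved and are not processed.
  return schedule.enabled && now >= schedule.scheduledStart && now <= schedule.scheduledEnd;
}
```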

24. Vonage WebRTC add-on missing from webshop after platform update
90% confidence
Problem Pattern

Users reported that the 'Web RTC for Vonage CC' add-on was missing from the Vonage webshop (only 'Screen Lock' was shown). Users questioned whether the add-on had been renamed (for example to 'Balto Vonage') or why it no longer appeared. Affected systems included Vonage CC, the webshop, Contact Pad and integrations like Balto.

Solution

The issue was caused by a prior Vonage platform update that removed the standalone 'Web RTC for Vonage CC' add-on. Vonage had merged the WebRTC functionality directly into the Contact Pad, so the separate webshop add-on was no longer published or required. The webshop showing only 'Screen Lock' was expected behaviour, and the add-on had not been renamed to 'Balto Vonage'. The user was informed and the ticket was closed.

Source Tickets (1)
25. Twilio POB inbound call preview behavior varies by opportunity stage
95% confidence
Problem Pattern

Twilio-connected inbound CRM calls showed two related issues: (1) call routing behavior varied by opportunity stage — calls tied to the incoming funnel (Leads, BOBs, applications in status "Eingang") were delivered immediately without an agent preview while calls for later opportunity stages presented a preview/selection menu; and (2) the Opportunity popup ("Oppy") sometimes failed to open on Twilio inbound calls with no error codes, persisting after client restarts. Affected systems included Twilio POB, the Oppy CRM integration, and (for routing requests) Vonage and DACH organizational units.

Solution

Investigations distinguished two separate Twilio-related issues and one planning request. For Twilio POB call-routing behavior, the team determined the behavior was intentional: incoming-funnel calls (Leads, BOBs, applications in status "Eingang") were configured for immediate delivery without an agent preview to prioritize rapid handling, while calls tied to later opportunity stages were routed with a preview/selection menu so agents could prepare; the configuration rationale and resulting behavior were communicated to stakeholders. For inbound Oppy failures, support executed standard troubleshooting (client restarts and checklist), escalated to developers, and developers contacted Twilio; the issue was identified as a known vendor-side defect with no local fix available and the case remained pending a Twilio/developer patch (a separate instance had temporarily resolved after issuing a new laptop, but that was not confirmed as a general resolution). For the DACH Vonage routing request, stakeholders met to gather requirements and agreed to evaluate routing options based on Opportunity Status (example: Opportunity Status "Definite" → forward to Studierendensekretariat); an implementation approach and timeline remained in planning pending that evaluation.

Source Tickets (3)
26. Provisioning of specific Twilio phone-number requests
90% confidence
Problem Pattern

Requests to provision, attribute or register specific phone numbers or Twilio IDs/callback numbers were reported, often listing exact numbers. Symptoms included numbers not appearing in a provider’s inventory, inability to assign individual Twilio/Dialpad IDs due to policy, requests to split or create separate provider lines with matching forwarding, and existing inbound calls needing routing from legacy providers. Affected systems included Twilio, Dialpad, Vonage, Cloudya/NFon, Intracall and Questnet; requests frequently involved toll-free numbers and billing/PO attribution.

Solution

Requests to provision, locate or register specific phone numbers or Twilio IDs/callback numbers were handled according to the system and organisational policy. When a requested number existed in Twilio it was assigned to the requester's Twilio account. For B2C staff, policy prevented issuing individual Twilio/Dialpad IDs; those numbers were registered to the standardized shared Dialpad ID "PreSales_DS_DeinStandort" and requesters were informed.

Inventories in NFon/Cloudya and Vonage were searched for provided numbers; located numbers had ownership, routing and billing/PO attribution documented (for example a toll-free number found in NFon was identified as a DS Sales number). Numbers not found in provider inventories were escalated to the Twilio team for further Twilio-side search. Requests to create separate lines in Vonage were implemented by provisioning a new line and applying the same forwarding configuration as the existing line while leaving the original line unchanged. During migrations or transitions, inbound routing was configured so specified numbers were routed from legacy providers (such as Intracall or Questnet) to Twilio while other numbers remained on the legacy platform.

Requests to add Twilio callback IDs were completed by adding the callback number/ID to the user's Twilio configuration and recorded in ticket comments; occasionally ticket resolution fields contradicted completion comments (e.g., showing "Won't Do" despite a completion note), and ticket comments were relied on as the operational record. All unresolved cases or unusual findings were forwarded to the specialist telephony team for confirmation before ticket closure.

27. Unexpected callbacks traced to worker phone-number mapping and routing logic
91% confidence
Problem Pattern

Calls, callbacks and lead deliveries were intermittently misrouted to incorrect agents, queues, or geographic regions, producing wrong caller‑ID associations and callbacks delivered to unintended contacts. Symptoms included short rings that dropped without provider error codes, missing or non‑correlating call‑history entries when searching by account phone number, imbalanced round‑robin task distribution with large Salesforce overdue‑call backlogs for individual agents, and Dialpad‑originated calls failing Opportunity association with the error 'No Account UUID Found'. In some cases inbound calls were received despite an agent/channel being toggled off (availability slider), sometimes appearing minutes after the toggle and without a Task‑Sid available for tracing. Affected systems included Twilio, Vonage, IVR/callflow logic, contact‑center worker/queue mappings, Power Outbound, Dialpad, and Salesforce.

Solution

Support investigated recurring misrouted calls, callback delivery errors, lead‑assignment mistakes, and call‑history visibility gaps across Twilio, Vonage and contact‑center tooling and identified several distinct root causes with targeted remediations or investigative actions. Findings and confirmed remediations included:

• Contact‑center routing and worker‑record mapping: Provider number ownership and internal worker/queue assignments were audited and incorrect routing records or queue assignments were corrected.

• Twilio provider‑record anomalies and call‑forwarding: Twilio number/stub entries and call‑forwarding configurations were reviewed; incorrect stub/number assignments and forwarding rules were corrected where present and provider logs were retained for follow‑up on unresolved cases.

• IVR/callflow fallback behavior: A Vonage IVR exercised a configured fallback path that routed confirmed selections to alternate teams when a target queue/agent was unreachable; IVR fallback mappings were corrected.

• Agent state, connection timing, and outbound profiles: Twilio logs showed connection attempts to agents whose UI state appeared Available/Owner/Sticky but whose sessions did not complete; resetting user/outbound profiles or correcting agent‑state synchronization restored delivery in multiple cases.

• Round‑robin/task distribution and Salesforce backlog: One user’s inbound task receipts were inconsistent with large Salesforce overdue‑call counts; assignments and RR pool membership were validated, overdue counts were captured, and task distribution/prioritization was escalated to operations for vendor/ops analysis where config changes were not sufficient.

• Number‑assignment/configuration errors: Callbacks routed to other contacts were traced to misconfigured mobile numbers at the provider; endpoints were updated after confirming the correct mobile with users.

• Recipient‑side spam labeling and caller‑ID reputation: Outbound Twilio numbers were sometimes labeled as spam on recipients’ devices, reducing answer rates; affected recipients were advised to mark numbers Not Spam and company mobiles were used as short‑term workarounds. Suspected spoofing/abuse cases were escalated to carriers/providers with precise timestamps and provider logs for reputation investigation.

• Power Outbound and lead‑routing: Isolated lead deliveries to incorrect teams were corrected by updating lead‑routing/assignment records for affected users.

• DS control‑dashboard visibility and call‑history correlation: Missing call‑history entries were traced to dashboard view/filtering or sync/indexing issues rather than absent provider call records; dashboard/view configuration issues were escalated for correction.

• Salesforce–Twilio and Dialpad integration: A general Salesforce–Twilio connection problem was identified and resolved, restoring Opportunity auto‑association for standard Twilio calls. Dialpad‑originated manual calls continued to fail Opportunity association with the Dialpad error 'No Account UUID Found'; Dialpad‑specific logs were retained and that issue remained under vendor investigation.

• Channel/availability toggle observation: At least one incident was observed where an inbound call arrived approximately 2–3 minutes after a user had toggled the channel off via the availability slider. No Task‑Sid or Opportunity reference was provided for that occurrence, so the event remained untraced and was placed under monitoring pending recurrence and provider logs.

Investigative artifacts used across incidents included Twilio and Vonage provider call logs and TaskSids, IVR/callflow mappings, contact‑center worker/queue records, Salesforce agent‑state and backlog checks, Power Outbound profile resets, recipient spam‑status checks, and Dialpad integration/error logs where available. Confirmed remediations consisted of correcting routing and IVR fallback logic, updating provider number/stub assignments and call‑forwarding endpoints, resetting user/outbound profiles, adjusting Power Outbound lead routing, advising on spam‑label handling and using alternate mobiles for urgent calls, engaging carriers/providers for caller‑ID reputation investigations, and escalating dashboard indexing/configuration faults. A small set of cases (notably the Dialpad Opportunity association failure, some round‑robin imbalances, and the channel‑toggle inbound case lacking trace identifiers) required further vendor or operations investigation and were retained for additional tracing using TaskSids, timestamps and provider logs.

28. Perceived Power Outbound outage caused by low task/applicant volume, not platform failure
91% confidence
Problem Pattern

Outbound or inbound telephony appeared non-functional: outbound calls stopped or were extremely sparse and inbound callers sometimes heard an "all agents are busy" message. Telephony UIs showed empty task or queue counters and tasks often did not enter Twilio queues; the Twilio dialer at times failed to pull calls from queues (including cases where the dialer’s configured 'calls pulled' value was changed with no immediate effect). Ring times were unusually short, dialing delays were long, and some outbound calls presented unexpected caller IDs. Affected systems included Power Outbound, the Twilio dialer and Dialpad; users occasionally saw Dialpad client errors such as "try again or contact support."

Solution

Support analysis found these incidents were typically caused by empty or starved task queues rather than a single platform outage. Root causes identified across investigations included genuinely low or seasonal lead/opportunity volume, colleagues having already processed queued tasks (task counters showing 0), and upstream ingestion or task-push failures (partial imports, stale or missing leads, or ingestion backlogs). Dialing and routing behavior were sometimes impacted by site/server load or Twilio dialing delays; a subset of reports included misassigned outbound caller IDs. No consistent Twilio or Dialpad platform exceptions or error codes were observed; one case recorded a persistent Dialpad client error "try again or contact support" that did not clear after a client reboot or basic connectivity checks.

Support actions and observations that led to restorations included verifying outbound-channel flags (for example, Voice_Outbound and Power_Outbound), checking Twilio queue activity and agent-to-site assignments, and inspecting dialer configuration such as the "calls pulled" parameter (support recommended increasing the configured calls-pulled value — e.g., from 30 toward ~50 — although increasing this value did not always immediately trigger dialing). Support also ensured relevant Salesforce applicants/records were active and assigned; re-ingesting or scheduling tasks and clearing ingestion backlogs correlated with recoveries. Where dialing or number anomalies were configuration-related, outbound-number assignments and routing were escalated to specialists and CRM/import teams engaged for partial-import or push-backlog conditions.
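
One of the checks mentioned above, confirming whether queues are genuinely empty rather than stuck, can be approximated against the Twilio TaskRouter API. The sketch below is a rough illustration using the Twilio Node helper library; the workspace SID, credentials, and the assumption that outbound work items appear as pending TaskRouter tasks are placeholders/assumptions.

```typescript
// Sketch: checking whether TaskRouter queues are genuinely empty (the typical
// finding here) rather than assuming a platform outage. SID/credentials are placeholders.
import twilio from "twilio";

const client = twilio(process.env.TWILIO_ACCOUNT_SID, process.env.TWILIO_AUTH_TOKEN);
const WORKSPACE_SID = "WSxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"; // placeholder

async function countPendingTasks(): Promise<number> {
  // Pending tasks are work items waiting in a queue that no agent has reserved yet;
  // a count of zero supports the "starved queue" explanation over a dialer fault.
  const pending = await client.taskrouter.v1
    .workspaces(WORKSPACE_SID)
    .tasks.list({ assignmentStatus: ["pending"], limit: 100 });
  return pending.length;
}

countPendingTasks()
  .then((n) => console.log(`Pending outbound tasks: ${n}`))
  .catch(console.error);
```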

Applied mitigations recorded in tickets included using a live-activity visibility account and designating an internal Twilio key user as a backup monitor, temporarily reallocating agents or reverting to manual telephony while queues refilled, and monitoring until activity resumed. Some incidents had no definitive fix recorded at triage and were resolved after additional troubleshooting calls or when inbound task/activity volume returned.

29. Cloudya (CARE) web app full unavailability due to provider-side outage
90% confidence
Problem Pattern

Cloudya/CARE became inaccessible due to a provider-side service outage, preventing users from loading the web application or opening the Cloudya app. Users reported unexpected logouts, inability to authenticate, and missing or delayed password-reset emails, typically without client-side error codes. Symptoms affected multiple accounts and persisted until the provider restored service.

Solution

Support identified the incidents as provider-side Cloudya/CARE service outages and escalated them to the platform/provider operations team. The provider worked the outage and restored service; support then verified access across affected accounts. After restoration, support completed or retriggered password-reset flows and resolved lingering client issues (in one case via a Microsoft Teams remote support session when the Cloudya app would not open). During the outage, no user-side remediation reliably restored service; issues such as missing password-reset emails and failed sign-ins were resolved only after the provider restored availability.

30. Cloudya upgrade request blocked by lack of administrative credentials
90% confidence
Problem Pattern

A user requested an in-place upgrade of the Cloudya client from v1.7.7 to v2.0.0 but could not start the rollout because they did not possess administrative access or credentials for the Cloudya packaging/rollout process. Affected system: Cloudya; symptom: upgrade not initiated by requester despite request.

Solution

The ticket was escalated to the Cloudya specialist/packaging team, who initiated and completed the rollout to Cloudya 2.0.0 on behalf of the requester. The specialist asked the user to verify functionality after deployment and closed the ticket once the rollout was finished.

Source Tickets (1)
31. Salesforce telephony feature unavailable due to missing permissions/approval
80% confidence
Problem Pattern

A user reported that telephony functionality in Salesforce was unavailable for a named user; there were no error messages and the feature appeared inaccessible pending approval or permission changes. Systems involved: Salesforce and associated Twilio telephony integration.

Solution

Support clarified their scope (they only handled account creation) and responded by providing the requester with the correct channels to request the necessary permission and service changes: the Permission Change portal, the Bug Reporting portal, and the Twilio Requests portal. The request was left awaiting approval after the requester was directed to those portals.

Source Tickets (1)
32. Twilio initial-login frontend TypeError from null addEventListener during first-time setup
90% confidence
Problem Pattern

During first-time login/configuration in Twilio the browser frontend threw a JavaScript TypeError: "Cannot read properties of null (reading 'addEventListener')", which caused the onboarding UI or initial setup flow to fail or lose functionality. The error occurred in the client-side configuration path and prevented completion of the initial account configuration in the Twilio console.

Solution

The Twilio initial-login/configuration frontend code was patched to handle the null reference before attempting to call addEventListener. The fix was deployed to the affected environment and the user retested the onboarding flow; the initial setup and UI functionality worked after the deployment.
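
The class of fix described is a null guard before binding the event listener. A minimal TypeScript sketch under that assumption is shown below; the element id and handler name are illustrative, not the actual Twilio console identifiers.

```typescript
// Minimal sketch of guarding against a missing DOM node before binding a listener.
// "setup-submit" and onSetupSubmit are illustrative names only.
function onSetupSubmit(event: Event): void {
  event.preventDefault();
  // ... continue the first-time configuration flow ...
}

const submitButton = document.getElementById("setup-submit");
if (submitButton) {
  submitButton.addEventListener("click", onSetupSubmit);
} else {
  // Previously this path threw: "Cannot read properties of null (reading 'addEventListener')".
  console.warn("Setup submit button not rendered yet; skipping listener registration.");
}
```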

Source Tickets (1)
33. Twilio sign-in failure due to missing password-reset email — Omnichannel-handled account recovery
90% confidence
Problem Pattern

Users were unable to sign into Twilio accounts, including failures during the mandatory first-login. Reported symptoms included the 'Forgot password' flow not working or password-reset/recovery emails not being received; no error codes were displayed. In-house IT was unable to perform account or password recovery for Twilio-managed accounts.

Solution

Internal IT confirmed they could not perform Twilio account or password recovery for Twilio-managed accounts and directed affected users to the Omnichannel team. Omnichannel handled account and password recovery after users submitted requests via the Omnichannel support form/link. Separately, some login failures that involved a non-working 'Forgot password' flow or missing reset emails were resolved by having users sign in via the official Twilio login link using their existing Twilio credentials rather than triggering a password reset.

Source Tickets (2)
34. Verifying Vonage license for call-tracking / virtual-number campaign attribution
60% confidence
Problem Pattern

Requester asked whether an existing Vonage license supported call-tracking or campaign-tracking using virtual phone numbers. A measurable share of inbound conversions were not being attributed in Salesforce because phone-originated leads did not trigger an online first-conversion event. Systems involved were Vonage (programmable/campaign tracking) and Salesforce; no error messages were reported.

Solution

The inquiry was escalated to the Vonage specialist team and the requester was contacted via Microsoft Teams to explain the current status. The specialist team had not been able to confirm within the ticket whether the existing license included virtual-number call-tracking/campaign-tracking capabilities, so the request remained with vendor-specialist review and clarification of licensing/features was documented for follow-up.

Source Tickets (1)
35. Vonage account provisioning and Salesforce record linkage for new users
90% confidence
Problem Pattern

Requests to provision a Vonage account and link it to the user's Salesforce record. Symptoms included inability to create the account when the ticket omitted a required reference/comparison user, or when the supplied reference user had no Vonage account. Some requests originated from departments that use Cloudya rather than Vonage, causing an inappropriate product request. Affected systems: Vonage and Salesforce (Cloudya when relevant).

Solution

Vonage accounts were created and the account details were recorded and linked to the users' Salesforce records when a valid reference/comparison user was available (provisioning actions were documented in Salesforce — e.g., Haynert, Daniel on 2025-07-22 09:37). When a ticket lacked a required reference user, support requested one from the requester and proceeded to create and record the account after it was provided. When the supplied reference/comparison user did not have a Vonage account, support determined the request originated from a department that typically used Cloudya and therefore did not create a Vonage account; the need for Cloudya was noted in the ticket. All outcomes and product decisions were documented in Salesforce.

36. Calls terminating ~2 seconds after answering for a single external contact
85% confidence
Problem Pattern

Calls (both inbound and outbound) involving one specific external contact consistently dropped after approximately 2 seconds — immediately after the initial greeting — with no error codes shown. The issue reproduced across multiple call attempts and appeared isolated to that contact rather than affecting other numbers or system-wide calls.

Solution

Specialist investigation found no telephony-system or configuration errors and concluded the symptoms were consistent with unstable or poor network/connectivity on the applicant's side. No server-side or platform changes were applied; the case was closed pending the applicant restoring a stable connection and retesting.

Source Tickets (1)
37. Intermittent DTMF tones not recognised by IVR for specific incoming numbers
60% confidence
Problem Pattern

Callers intermittently reported that DTMF digits could not be sent or recognised when prompted by the IVR; incidents affected specific external numbers, occurred with both landline and mobile callers across multiple carriers, were not reproducible by support, produced no error codes in logs, and appeared sporadic rather than system-wide.

Solution

The incident was escalated to the telephony specialist team. They inspected the DTMF-transfer setting for the affected extension (studentsoffice ext. 855) and confirmed DTMF transfer was enabled. Call/IVR logs and provider traces were reviewed but showed no systemic or reproducible fault; local tests of DTMF detection succeeded. The outcome documented the configuration as correct and the issue as intermittent with no immediate root-cause identified, and the case was left under active monitoring for recurrence with provider-trace capture if the problem reappeared.

Source Tickets (1)
38. POB/Oppy session stuck in a phantom 'dummy call' causing no dial tone
90% confidence
Problem Pattern

Power Outbound (POB) launched but immediately stopped and produced no dial tone; the Oppy overview loaded with only a single 'hang up' option and the call UI remained unusable. Users reported a persistent, non-terminating call state (appearing as a "dummy call") that prevented normal dialing or outbound operations.

Solution

A technician determined the symptom was a stuck/phantom "dummy call". The user executed the documented procedure for clearing/unhanging the dummy call (referred to internally as "Festhängen in sog. Dummy-Calls", i.e. getting stuck in so-called dummy calls). After the phantom call was cleared, POB resumed normal operation and dial tone/functionality was restored.

Source Tickets (1)
39. Twilio blocking outbound international calls due to destination number risk classification
51% confidence
Problem Pattern

Outbound calls placed through CPaaS providers (notably Twilio) to specific international numbers or prefixes were intermittently blocked or failed to connect. Symptoms included immediate call termination with messages such as "This number is not available" or generic "call cannot be completed" messages, silent failures, or per-destination/prefix blocking visible in provider consoles (mixed "Blocked" / "Allowed - enabled" states). Affected destinations were sometimes reachable from mobile/PSTN but failed when routed via the CPaaS; failures could be associated with provider destination-number risk classification, carrier routing decisions, or geo-permission triggered by caller IP geolocation or source network.

Solution

Incidents were attributed to provider-level destination-number risk classification, carrier routing policies, or provider geo-permission rules that prevented calls to affected prefixes or individual numbers. In resolved cases Twilio reclassified the impacted destination ranges and adjusted the account's international dial permissions/whitelist, which restored service. Twilio required concrete example destination numbers and justification to complete reclassification. Provider-specialist teams routinely requested provider-side call identifiers and traces (for example CallSid/TaskSid, carrier call GUIDs and internal traces such as Oppylink) so the downstream carrier could review classification and routing decisions.

Investigations observed mixed dialing-permission states in the Twilio console for samples (some numbers shown as "Blocked", others as "Allowed - enabled"), and geo-permission blocks that correlated with caller IP geolocation or use of a neighboring-country network (notably users near borders). Where a destination was also unreachable from a mobile device, the failure reflected a PSTN-side outage rather than CPaaS blocking; conversely, destinations that worked from mobile but failed via the CPaaS typically indicated provider-side blocking or routing policy. Because many incidents presented no explicit error codes, collecting exact example destination numbers plus any provider-side call identifiers materially improved the provider's ability to diagnose and restore service.

One Vonage report recorded a carrier Call GUID for an intermittently blocked UK number but had no definitive resolution documented. For some regions (for example DE) numbers were generally enabled in Twilio except those classified as "Extreme High Risk."
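
The mixed "Blocked"/"Allowed - enabled" states mentioned above correspond to Twilio's per-country voice dialing permissions, which can also be read programmatically. The sketch below (Twilio Node helper library, TypeScript) shows one way such permissions might be inspected when triaging a blocked destination; credentials are placeholders and the field names follow the public API as best understood here.

```typescript
// Sketch: inspecting Twilio voice dialing permissions for a destination country,
// to help distinguish provider-side blocking from a PSTN outage. Placeholders only.
import twilio from "twilio";

const client = twilio(process.env.TWILIO_ACCOUNT_SID, process.env.TWILIO_AUTH_TOKEN);

async function checkCountryPermissions(isoCode: string): Promise<void> {
  const country = await client.voice.dialingPermissions.countries(isoCode).fetch();
  console.log(`${country.name} (${country.isoCode})`);
  console.log(`  low-risk numbers enabled:     ${country.lowRiskNumbersEnabled}`);
  console.log(`  high-risk special numbers:    ${country.highRiskSpecialNumbersEnabled}`);
  console.log(`  high-risk toll-fraud numbers: ${country.highRiskTollfraudNumbersEnabled}`);
}

checkCountryPermissions("DE").catch(console.error);
```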

40. Twilio POB creating duplicate outbound tasks and repeatedly calling same Salesforce opportunities
46% confidence
Problem Pattern

Twilio Power Outbound repeatedly delivered the same Salesforce Opportunity contact (Oppy) to agents, creating duplicate outbound tasks and distinct Twilio Task SIDs for the same Opportunity. Repeats occurred in short windows (from about one hour to multiple days after prior contact), with some contacts redialled multiple times while the dialer showed reserved calls for other records but no new Oppys were delivered. No Twilio error codes were returned and the behaviour affected multiple users and time windows.

Solution

Support engineers captured call timestamps, duplicated Twilio Task SIDs, Salesforce Opportunity identifiers and session logs and escalated the consolidated data to Twilio for product-level analysis. In at least one occurrence, support scheduled and implemented a systemic deployment intended to stop repeated Oppy deliveries and asked the user to retest; that ticket was closed as Done without detailed logs or a documented root cause. Multiple incidents showed the dialer reporting reserved calls for other records while repeatedly delivering the same Opportunity and returning no Twilio error codes. Across the ticket corpus there was no definitive corrective configuration change or vendor-confirmed root cause recorded; vendor analysis was pending while incidents recurred during the June–August 2024 window.

Source Tickets (3)
41. Scheduled callback (Rückruftermin) recorded but not executed in Twilio
35% confidence
Problem Pattern

A scheduled callback appointment was logged by the internal scheduling system and appeared in system logs at the scheduled time, but Twilio (and/or the MS-managed service component) did not execute the outbound callback. The symptom was a missing/missed callback with no Twilio error codes recorded; the behaviour manifested as an expected call that never reached agents or customers.

Solution

Investigation confirmed the callback record had been received into the system at the scheduled timestamp but was not acted upon by the managed service component. The incident was documented for deeper investigation by engineering; however, no concrete remediation steps or root-cause determination were recorded before the ticket was closed.

Source Tickets (1)
42. Twilio Power Outbound slow call setup and repeated follow-up (FUP) assignment
40% confidence
Problem Pattern

Outbound call setup via Twilio Power Outbound experienced excessive call-establishment delays (calls taking up to ~2 minutes to connect). The same Opportunity was sometimes assigned as follow-up (FUP) multiple times (duplicate FUPs), and duplicates persisted even after removing the FUP. The issue affected the team broadly and manifested as long dial latencies and duplicated Opportunity assignments.

Solution

The incident and a representative TaskSid were collected and the case was escalated to Twilio support for analysis. The ticket record shows it was forwarded to the vendor/engineering contact for deeper troubleshooting but contains no documented technical fix or configuration change; the ticket was closed after escalation without an internal remediation logged.

Source Tickets (1)