Telephony
Cloud
Last synthesized: 2026-02-12 23:31 | Model: gpt-5-mini
Table of Contents
1. Cloudya/Nfon account, extension and phone-number provisioning failures
2. Shared external numbers and Cloudya call-queue / ring-group setup limitations
3. Vonage account roles, call-group membership and dialer/browser integration failures
4. Provisioning corporate SIMs for WhatsApp Business and PC-based SMS access
5. Temporary mobile data increase to 'unlimited' via Telekom self-service portal
6. Cloudya desktop window positioned off-screen on multi-monitor setups
7. Twilio EKG showing orange warning triangle and WhatsApp messages stuck loading
8. iPhone not receiving incoming calls due to unconditional call forwarding
9. Provisioning Cloudya accounts and assigning geographic DIDs for external users
10. No audio on Cloudya desktop/web calls while local audio devices test OK
11. Unexpected outbound calls from Twilio triggered by scheduled tasks on CRM Opportunities
12. Twilio login/console failing with JavaScript TypeError on Worker records
13. Avaya Communicator prompted for configuration and prevented agent registration
14. Teams blocked international outbound calls due to missing PSTN/Calling-Plan license
15. Outbound Cloudya/NFON numbers flagged as 'Fraud/Spam' on recipient Android devices
16. Inbound callflow routing caused by outdated service/opening hours
17. Avaya reporting does not retain or expose per-direct-number call-volume data
18. Twilio blocking outbound WhatsApp when WAPP consent record appears stale
19. Inbound call qualification failure caused by recent code regression
20. One-off PBX call forwarding activation for a specific DID
21. Twilio outbound call failure 45301 caused by incorrect or misformatted destination number
22. Twilio Caller ID registration blocked when service mobile cannot receive verification calls
23. Outbound calls not processed when Power_Outbound enabled before its scheduled start
24. Vonage WebRTC add-on missing from webshop after platform update
25. Twilio POB inbound call preview behavior varies by opportunity stage
26. Provisioning of specific Twilio phone-number requests
27. Unexpected callbacks traced to worker phone-number mapping and routing logic
28. Perceived Power Outbound outage caused by low task/applicant volume, not platform failure
29. Cloudya (CARE) web app full unavailability due to provider-side outage
30. Cloudya upgrade request blocked by lack of administrative credentials
31. Salesforce telephony feature unavailable due to missing permissions/approval
32. Twilio initial-login frontend TypeError from null addEventListener during first-time setup
33. Twilio sign-in failure due to missing password-reset email — Omnichannel-handled account recovery
34. Verifying Vonage license for call-tracking / virtual-number campaign attribution
35. Vonage account provisioning and Salesforce record linkage for new users
36. Calls terminating ~2 seconds after answering for a single external contact
37. Intermittent DTMF tones not recognised by IVR for specific incoming numbers
38. POB/Oppy session stuck in a phantom 'dummy call' causing no dial tone
39. Twilio blocking outbound international calls due to destination number risk classification
40. Twilio POB creating duplicate outbound tasks and repeatedly calling same Salesforce opportunities
41. Scheduled callback (Rückruftermin) recorded but not executed in Twilio
42. Twilio Power Outbound slow call setup and repeated follow-up (FUP) assignment
1. Cloudya/Nfon account, extension and phone-number provisioning failures
Solution
Access and calling were restored by re-linking or reprovisioning Cloudya/NFON identities, including user telephony identities, internal extensions, and external DIDs. In multiple cases login failures were traced to accounts with no assigned telephone number (numbers had been auto-deleted after prolonged inactivity, commonly ~three months); assigning or restoring the previous IU/fixed-line number to the user restored Cloudya login. Accounts that failed activation or triggered expired/looping forgot-password flows were recreated or had passwords reset and fresh activation/forgot-password links issued; activation and password emails were frequently located in Outlook Junk/Spam or delivered to obsolete addresses. Deleted or recycled numbers were reassigned or replaced when possible; numbers restored on the provider side reappeared in Cloudya and generated notification emails. Site-bound external numbers were corrected by assigning the proper physical site. Missing endpoints and apps were reprovisioned and the Cloudya app was assigned in Azure AD/Intune/Company Portal (a centralized Company Portal deployment of Cloudya v1.7.0 replaced ZIP installers and removed persistent update prompts). Desktop and web clients that showed only Settings, a white screen, or a missing dialer were restored by signing out/in or by recreating the user profile; assigning a missing internal extension restored the full desktop UI in several reports. Queue and function-key discrepancies were resolved by adding users to the correct queue and applying a reference user's function-key/profile (changes typically propagated within minutes; occasional 5–10 minute delays were observed). Fax devices were restored by binding the device identifier and PIN in Cloudya Settings → Fax. Voicemail/mailbox PINs were set using internal extensions or temporary mailbox passwords, and activation details were sometimes delivered via internal portals. Inventory and provider discrepancies were resolved by auditing NFON forwarding lists against the Smartphone Inventory and provider records and disabling provider-side forwardings that remained active after inventory deactivation; NFON display names and ownership records were updated where they did not match. Device investigations verified NFON numbers and analog-to-network adapters (for example Cisco SPA112) and checked adapter online status; where adapters reported online but calls still failed, alternative solutions (for example GSM emergency-call devices) were proposed. Mobile call-forwarding limitations (forwarding to a single number) were noted. Key-user and monitoring access issues were fixed by adding staff to the Cloudya Keyuser group. Provider incidents (including NFON/Cloudya outages and B2B/Twilio inbound routing interruptions) were tracked as vendor outages; NFON/Cloudya applied configuration and routing fixes and confirmed inbound reachability with test calls. In one persistent case a soft client continued to miss inbound events after reinstall; that case was resolved operationally by finalizing the migration to Twilio as the long-term remedy.
2. Shared external numbers and Cloudya call-queue / ring-group setup limitations
Solution
Affected staff were added as members of the relevant site-central hotlines and call queues, and queue-login function keys (softkeys) were created or copied from a working profile and assigned or removed from user profiles as appropriate; the new softkeys became visible on endpoints after users logged out and back in. Investigation determined Cloudya call queues and ring-groups delivered calls only to the user’s configured primary device; changing a user’s primary device to the intended endpoint (mobile or desktop) restored audible and visual ringing on that client. Where callers needed voicemail if no agent answered, a shared mailbox was added as the queue’s final destination so callers could leave messages. User account metadata errors (incorrect email addresses and misspelled surnames) were corrected and restored callback visibility and notification delivery. Hotline provisioning required an existing site-level DID/number block; missing number blocks were allocated and provider/contractual steps completed before hotlines were created. Outgoing calls continued to display the account’s assigned extension/number-block; changes to displayed outbound numbers required adjusting that assignment. Misrouted campus calls were often traced to public listing or forwarding of central service numbers; in a separate class of incidents, incomplete or misconfigured inbound Twilio/DS call-flow or skill logic caused hotlines to be delivered to unintended recipients and were mitigated by disabling the Inbound skill until the DS flow was completed. Provisioning of third-party trunks/accounts (e.g., Vonage) failed when required reference-user details were not provided; supplying the required reference user or using an alternative provider (Twilio was recommended in some cases) was necessary to complete provisioning. Where users could not install the Cloudya client because they lacked local administrator rights on Windows devices, the Cloudya app was assigned to the device via Microsoft Intune (Company Portal) so the app could be installed without local admin access. For requests to let remote or home-office staff answer emergency hotlines, consistent resolutions included adding remote staff as queue members or adjusting queue/forwarding routing so remote endpoints could receive calls. A pilot for multiple DS locations was proposed for broader hotline coverage. Investigation of webinar-delivery requirements confirmed Zoom webinars required a separate webinar license and no free webinar licenses were available.
3. Vonage account roles, call-group membership and dialer/browser integration failures
Solution
Incidents were resolved through a combination of account/permission fixes, CTI/CRM mapping corrections, client-state resets, vendor patches, and device-level remediation. Provisioning fixes: missing or unlinked Vonage/Twilio accounts were created or reactivated, invitation/password flows completed, expired credentials reset, accounts unlocked, and correct Vonage/Twilio roles and reference-user mappings applied and recorded in Salesforce. CTI/call‑center fixes: integrations were reinitialised or toggled, stale Salesforce identifiers and mappings were corrected, and legacy or duplicate profiles removed so agents could join queues and regain Phone Button and call‑logging. Browser and client‑state faults: server‑side session terminations, user sign‑out/sign‑in, browser/page reloads, in‑page “Clear Issues”, or reinstall/reinitialise of the Vonage WebRTC softphone and extensions cleared many transient UI errors; legacy extension remnants were removed where present. Network, signalling and provider token faults: voice clients/services were restarted, valid access tokens reissued, and vendor SDK/telephony patches applied (including Twilio caller‑ID and signalling fixes); when UI controls or caller‑ID entries remained missing, call identifiers (TaskSid/CallGUID) were captured and escalated to vendor support. Twilio Flex/TaskRouter observations: several channel‑switch failures from chat to outbound voice were investigated by capturing TaskSid and worker_attributes for escalation; a number of those tickets were closed with no documented technical remediation. Audio and device faults: microphone permissions, incorrect device selection, OS/browser audio drivers, or defective headsets/USB dongles were common causes — microphone access and correct I/O selection were restored, Chrome/Edge and audio drivers updated, audio devices reinitialised and faulty headsets/dongles replaced. Bluetooth interoperability problems often fell back to wired operation until vendor driver/stack updates were applied. A metallic hang‑up noise was removed after a vendor-provided Vonage client configuration change and client/headset restart. Windows 11/Dell symptoms (screen/monitor turning off during calls, mute‑button flicker) were linked to power‑management and driver interactions and were resolved after driver or vendor tool updates, power‑profile adjustments, or USB‑dongle replacement. SIP trunk signalling variability and reporting misclassifications were escalated to carriers or product teams and resolved by adjusting trunk settings or correcting subscription/forwarding targets. Agent‑status change failures, voicemail recording failures, and contaminated internal contact lists were escalated to vendor support; some of those investigations had no recorded remediation and were closed without technical resolution.
4. Provisioning corporate SIMs for WhatsApp Business and PC-based SMS access
Solution
Support verified Inventory360 and contract/line records and used the vendor-order Automation for Jira or direct distributor (Conbato) requests to order, reissue, replace, or reassign physical SIMs and eSIM profiles; order confirmations were sent to the contract owner and Mobilfunkbestellungen@iu.org. Where eSIMs were inactive or unusable, carriers issued new activation profiles/QR codes (sometimes emailed) and technicians either obtained replacement eSIMs from the distributor or restored service by removing and re-adding the eSIM profile on-device; both approaches resolved multiple incidents. When internal automation blocked processing, requesters were routed to create a New mobile device / mobile-order ticket and select 'Only mobile' so the distributor workflow could run. Service loss was traced to incorrect Inventory360 or contract records (for example lines marked auf Lager or missing from Inventory360); service was restored after correcting Inventory360/contract status and having Conbato reactivate or reassign the number. If a number had already been returned to the carrier and could not be recovered, a new number and SIM/eSIM were issued. On iOS devices that lacked the Add Cellular Plan option because a line had no active contract, the issue was resolved by re-establishing the contract or providing a physical SIM. Unused physical SIMs were decommissioned by requesting the distributor move the number to a blank SIM or release it, physically destroying the SIM card, and confirming release with the provider. Non-technical constraints were recorded: IU policy and HR/legal reviews sometimes prevented transferring corporate mobile numbers to departing employees, numbers were routinely blocked/unavailable for a period after offboarding, and porting private numbers into an IU mobile contract was sometimes refused by the mobile provider. Carrier-specific transfer requirements caused additional delays; for example Telekom required a completed contract-transfer form including the provider framework contract number (Rahmennummer), and missing Rahmennummer or incomplete paperwork delayed or prevented number/contract transfers. When numbers changed, related external services (for example Twilio caller‑ID) were updated or handed off to the responsible service owner for caller‑ID verification and configuration. Additional observed resolution: some work phones already had an eSIM that could be moved between devices; confirming the eSIM existed on the company handset and transferring the eSIM profile restored the work line without issuing a new profile.
5. Temporary mobile data increase to 'unlimited' via Telekom self-service portal
Solution
Temporary increases for the current billing month were completed via Telekom's self‑service portal (pass.telekom.de). These changes were performed from the company phone while on the Telekom LTE network so the portal detected the correct tariff; via the portal users booked additional data volume or applied a temporary 'unlimited' setting for the billing month. For accounting and provisioning, approvals/orders were sent to the external provider Conbato and to internal distribution addresses (Mobilfunkbestellungen@iu.org and cpg‑requests). Automation for Jira produced order emails that contained requester and contract‑holder details, service type, requested delivery date (when provided) and instructions to send confirmations to the contract holder. No delivery address was required for tariff‑only changes, and tickets were marked done after carrier confirmation of the order. Permanent increases to the standard monthly allowance were handled by IT on the user's behalf: IT requested any required manager approval, submitted the contract change to the carrier/contract team, and the carrier processed and activated the new package; users were CC'd on the carrier confirmation e‑mail. Cases sometimes presented as unchanged quotas in the Magenta‑App/eSIM until carrier provisioning completed; IT verified carrier records and informed the user after activation. As a short‑term workaround, purchasing day passes via the Magenta app was suggested; when travel was frequent and repeated day passes proved more expensive, users sometimes opted for a permanent contract increase instead. Occasionally tickets were automatically closed by Automation for Jira before a definitive resolution was recorded; in those instances staff reopened or recreated the necessary order/confirmation workflow and completed the carrier order before closing the ticket on carrier confirmation.
6. Cloudya desktop window positioned off-screen on multi-monitor setups
Solution
Cloudya desktop windows that were positioned on a secondary or disconnected monitor were restored by moving them back to the primary display (for example via the taskbar preview 'Maximize' action) or by reconnecting the external monitor. Users who experienced a missing or unresponsive dialer regained calling functionality after signing out of the desktop client and signing back in; unresponsive dialogs sometimes persisted until the app view was refreshed, scrolled, fully exited and restarted, or the program was reinstalled. In incidents where the desktop client was missing or unresponsive after installation or a password reset, the client was reinstalled and credentials were reset. The Cloudya web application was used as a temporary fallback while Company Portal or connectivity state was resolved. As an alternative telephony mitigation when the desktop app would not open, support configured temporary call forwarding to another user/phone (for example to a colleague or mobile number) and later restored the user's preferred mobile number as the call target; the user subsequently confirmed Cloudya access was restored. Intermittent internet connectivity and display reconfiguration were noted as likely contributing factors in several cases.
7. Twilio EKG showing orange warning triangle and WhatsApp messages stuck loading
Solution
Incidents were triaged into Conversations/WhatsApp UI problems and telephony/TaskRouter anomalies; resolution actions spanned client, routing/database, network, and engineering teams. Collecting TaskSIDs, reservation IDs, HAR captures and full client console/network logs materially accelerated root-cause identification. Recurring incidents were closed after combining reservation/task cleanups, routing/database corrections, engineering channel/SDK patches, client hard reloads and cache clears (with offending plugins/channels disabled as short-term workarounds), network/bandwidth remediation, and Twilio-side fixes to tokens, workers, permissions, or database state.
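As an illustration of the TaskSID/reservation collection mentioned above, the following minimal sketch (assuming the official twilio Python SDK; the Workspace/Task SIDs and environment-variable names are placeholders) pulls a Task and its Reservations so the identifiers reported in a ticket can be cross-checked against Twilio's own records.

```python
# Hedged sketch (official twilio Python SDK assumed; SIDs and env-var names are
# placeholders): fetch a TaskRouter Task and its Reservations by TaskSid so the
# identifiers reported in a ticket can be cross-checked against Twilio records.
import os
from twilio.rest import Client

client = Client(os.environ["TWILIO_ACCOUNT_SID"], os.environ["TWILIO_AUTH_TOKEN"])

WORKSPACE_SID = "WSxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"  # placeholder
TASK_SID = "WTxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"       # TaskSid from the ticket

task = client.taskrouter.v1.workspaces(WORKSPACE_SID).tasks(TASK_SID).fetch()
print(task.assignment_status, task.task_queue_friendly_name, task.attributes)

# Reservations show which worker the task was offered to, and when.
for r in client.taskrouter.v1.workspaces(WORKSPACE_SID).tasks(TASK_SID).reservations.list():
    print(r.sid, r.worker_name, r.reservation_status, r.date_created)
```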
8. iPhone not receiving incoming calls due to unconditional call forwarding
Solution
Incoming call delivery was restored after clearing unconditional call forwarding/diversion entries or removing conflicting SIMs. In incidents where carrier-level forwarding persisted across device resets, carrier unregister/GSM MMI codes were applied (examples recorded: ##21# to cancel unconditional forwarding; ##002# to cancel all forwarding, including to mailbox). In other cases, forwarding or automatic answer had been set on a handset or on the originating line (including forwarding to a colleague or to an automatic attendant) and was removed on that device or line. On dual-SIM phones the unregister/clear action and any forwarding checks were applied per affected SIM/line. In one instance calls were blocked because an old physical Vodafone SIM remained in the device after switching to a Telekom eSIM; removing and disposing of the old physical SIM restored incoming calls. Support also used an internal test number (3311) to verify call routing from the affected mobile while troubleshooting. Conditional forwarding entries were preserved where they were intentionally required for later reconfiguration.
9. Provisioning Cloudya accounts and assigning geographic DIDs for external users
Solution
Technicians provisioned NFON/Cloudya accounts and assigned internal extensions and geographic or virtual DIDs consistent with each organisation’s numbering plan after confirming site or requested area code. When local DIDs were unavailable they ordered geographic number blocks from NFON, submitted required NFON documentation, and applied provisioned numbers to Cloudya accounts. For requests to publish a number on business cards or in the IU Store form technicians provided a dedicated corporate/virtual DID and updated the corresponding Workday contact and printing records so private numbers were not used on Visitenkarten. Where MS Teams or Zoom presentation was required technicians verified the actual telephony platform in use and either provisioned the DID on that platform or coordinated with platform owners to enable external‑number presentation; unnecessary Cloudya provisioning was removed to avoid duplicate accounts and billing. Usernames were set to users’ email addresses when appropriate and shared or generic/team usernames were created on request; cost‑center or charge codes were applied when required. Account access was enabled via the standard activation workflow and password set/reset emails; after activation assigned numbers were visible in the Cloudya web app and PINs/numbers were delivered via IU Safe secure email. The Cloudya/Claudya client package was assigned to the Company Portal for softphone installs and eSIM/SIM provisioning or handset procurement was coordinated for mobile use. Outbound caller‑ID and Contact Pad/Phone Pad anomalies were resolved by verifying users’ configured outbound numbers, assigning the correct outbound number, and removing outdated entries so only appropriate caller‑ID options remained. Departmental or role numbers were configured as call queues with routing to primary users and overflow to departmental or backup numbers after the configured timeout. For users requiring forwarding to private numbers only during working hours technicians configured time‑based routing/forwarding where supported and clarified voicemail behavior: voicemail was recorded on the provisioned corporate/Cloudya service when voicemail was configured as the post‑forward fallback, and routing was adjusted so unanswered forwarded calls hit Cloudya voicemail (or the selected failover) after the configured ring/timeouts. For legacy corporate landline enquiries technicians confirmed whether the legacy number remained operational and suitable for external publication, then repurposed the legacy number or provisioned a new corporate DID as requested. For decommissioning of legacy telephony services (for example Intracall and associated NCTI servers) technicians maintained service availability until scheduled inbound‑number migrations to the replacement platform were completed, applied required security updates and transferred address ranges after the migration, and confirmed there were no active users before shutting down VMs and decommissioning the service. Before provisioning technicians checked for existing Cloudya accounts/extensions to avoid duplicates and requested a reference user when required. For physical office telephony requests they clarified whether desk phones, mobile handsets, or a mixed solution was required, confirmed site/room and quantity for mobile handsets, coordinated procurement and SIM/eSIM activation, and enabled voicemail/answering‑machine services in Cloudya or configured external answering‑machine requirements as requested. 
When printer or other office device records were missing they liaised with facilities/asset management to locate or register devices and coordinated platform‑specific setups. All assignments and status updates were recorded and communicated to requestors through the appropriate channels (for example Microsoft Teams messages, IU Safe secure email).
10. No audio on Cloudya desktop/web calls while local audio devices test OK
Solution
Investigations identified two primary root-cause families: network-level media-path problems and client-side device/permission/firmware issues. Network cases included RTP/UDP media flows being modified, blocked, or degraded by on-prem routers, NAT features or ISP/office network conditions (examples: SIP-ALG interference, NAT handling, or elevated latency/jitter/packet loss). These incidents were diagnosed from call logs and media-level traces and by reproducing calls over a cellular hotspot or an alternate network; calls recovered when the router/NAT features interfering with RTP were removed or when the media path or network segment was corrected, and other incidents improved after users moved to a different network (home/hotspot) that did not exhibit packet loss or high jitter. Client-side cases involved incorrect device selection or enumeration, OS/browser microphone permission blocks, driver or connection faults, and headset firmware defects; those were resolved after the telephony device became exposed to the OS/browser (re-pairing/reconnecting, reinstalling drivers), the correct Cloudya audio device was selected and the Cloudya client or browser was restarted so the device became selectable. Vendor utilities and firmware/driver updates (for example Jabra Direct/System Update) resolved several instances of severe static, interference, or noise. Troubleshooting relied on Cloudya’s built-in audio-test behavior, device-selection checks, diagnostic logs, hotspot/alternate-network tests and call-recordings or network traces to distinguish network-path degradation (latency/jitter/packet loss or RTP modification) from local device or firmware/driver failures. Swapping a workstation without improvement was treated as an indicator of a network- or path-level problem rather than a local endpoint fault.
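The network-versus-device triage described above can be approximated with a quick latency/loss probe when RTP-level traces are not yet available. The sketch below is illustrative only: TCP connect timing is a coarse proxy for latency, jitter and probe loss, not a substitute for media-level traces, and the target host, port and sample count are placeholders.

```python
# Illustrative probe only: TCP connect timing is a coarse proxy for latency and
# probe loss, not a substitute for RTP-level traces. Host, port and sample count
# are placeholders chosen for the example.
import socket
import statistics
import time

HOST, PORT, SAMPLES = "example.com", 443, 20

rtts = []
for _ in range(SAMPLES):
    start = time.monotonic()
    try:
        with socket.create_connection((HOST, PORT), timeout=2):
            rtts.append((time.monotonic() - start) * 1000.0)
    except OSError:
        pass  # count as a lost probe
    time.sleep(0.2)

loss_pct = 100.0 * (SAMPLES - len(rtts)) / SAMPLES
if rtts:
    print(f"min/avg/max ms: {min(rtts):.1f}/{statistics.mean(rtts):.1f}/{max(rtts):.1f}")
if len(rtts) > 1:
    print(f"jitter (stdev) ms: {statistics.stdev(rtts):.1f}")
print(f"probe loss: {loss_pct:.0f}%")
```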
11. Unexpected outbound calls from Twilio triggered by scheduled tasks on CRM Opportunities
Solution
Incidents were mitigated by removing active Twilio call tasks and stopping the dialer/dialing workflows that caused re-presentation. Specific actions that stopped customer-facing calls included deleting queued Twilio Tasks from the Twilio Preview interface; removing unnecessary Planned FUP entries from Opportunity Schedules and inbound-activity Schedules; closing duplicate CRM/MS accounts that generated extra attempts and confirming their Twilio tasks were closed; and temporarily marking an affected Opportunity as “Lost” so the CRM deleted the currently active Twilio task before reactivation. For locations where the dialer retriggered previously attempted Opportunities after Twilio showed no active calls, staff temporarily stopped or disabled the dialer for the affected location. Reporters were also advised to reapply or set opt-in/DOI status on the MS Account and to confirm removal from POB/calling lists when an EEB/no-call was recorded; in at least one reported case an EEB/no-call existed but the contact remained on the POB list and the ticket was closed without documented remediation. Support requested concrete TaskSid examples from reporters to investigate task-level routing and scheduling. During triage, developers diagnosed scheduling/dialing handling errors: scheduled Twilio calls were not being cancelled during or after inbound/DialPad calls, the POB/Push flow could trigger repeated or parallel dialing attempts, scheduled FUPs could execute earlier than their scheduled date, and permission regressions prevented users from removing unexpected tasks. A permanent code fix for the scheduling and dialing logic was under development at the time these mitigations were used.
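For reference, the API-side equivalent of deleting a queued Twilio Task from the preview interface looks roughly like the sketch below; it assumes the twilio Python SDK, placeholder SIDs and reason text, and a task still in a pending or reserved state. It is not the exact tooling used in these tickets.

```python
# Hedged sketch of the API-side equivalent of removing a queued Twilio Task from
# the preview interface (twilio Python SDK; SIDs and reason text are placeholders).
import os
from twilio.rest import Client

client = Client(os.environ["TWILIO_ACCOUNT_SID"], os.environ["TWILIO_AUTH_TOKEN"])
WORKSPACE_SID = "WSxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
TASK_SID = "WTxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"

task = client.taskrouter.v1.workspaces(WORKSPACE_SID).tasks(TASK_SID).fetch()
if task.assignment_status in ("pending", "reserved"):
    # Cancelling keeps the record for reporting; task.delete() would remove it entirely.
    task.update(assignment_status="canceled", reason="duplicate scheduled call removed")
```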
12. Twilio login/console failing with JavaScript TypeError on Worker records
Solution
Incidents were traced to a small set of account-level causes and one problematic intranet link. Multiple cases where Twilio initialization broke were resolved after missing Worker metadata (notably team/location assignment and skill attributes) was restored; copying the missing attributes from a colleague’s Worker at the same site reinstated the required metadata and removed the JavaScript initialization error and UI failures. Several stalled login/salesforce sync flows were caused by account configuration interacting with browser SSO/session state (commonly Chrome with an existing Salesforce session); a specialist updated account settings during short remote sessions which restored normal sign-in and Salesforce sync. A distinct class of admin preview/sync failures was caused by a legacy intranet “Chihuahua” link: replacing the legacy link with the standard flex.twilio.com workflow and applying the previously used pinned workaround restored preview mode and Salesforce sync for affected admin accounts. Support also restored or enabled individual Twilio access where accounts or authentication were disabled and re-added users’ work numbers to Twilio’s Select Caller ID when those numbers were missing. Cookie-blocker extensions were investigated and were not found to be the root cause. Separately, at least one report where Twilio did not display the linked Salesforce account on the Case (Salesforce showed the correct link) and outgoing calling failed was escalated to the Twilio specialist team; Task SIDs were collected and the vendor was engaged for direct investigation, but no technical fix was recorded in the ticket before it was closed.
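A hedged sketch of the "copy the colleague's Worker attributes" step follows; the attribute keys ("team", "location", "skills") and the SIDs are assumptions, and the real attribute schema should be confirmed in the affected workspace before applying anything.

```python
# Hedged sketch of copying routing-relevant Worker attributes from a colleague's
# Worker to the broken one; the keys ("team", "location", "skills") and SIDs are
# assumptions and should be checked against the workspace's real attribute schema.
import json
import os
from twilio.rest import Client

client = Client(os.environ["TWILIO_ACCOUNT_SID"], os.environ["TWILIO_AUTH_TOKEN"])
WORKSPACE_SID = "WSxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
REFERENCE_WORKER_SID = "WKxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
BROKEN_WORKER_SID = "WKyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy"

workspace = client.taskrouter.v1.workspaces(WORKSPACE_SID)
reference_attrs = json.loads(workspace.workers(REFERENCE_WORKER_SID).fetch().attributes)
broken_worker = workspace.workers(BROKEN_WORKER_SID).fetch()
attrs = json.loads(broken_worker.attributes or "{}")

# Copy only the routing-relevant keys; leave personal fields untouched.
for key in ("team", "location", "skills"):
    if key in reference_attrs:
        attrs[key] = reference_attrs[key]

workspace.workers(BROKEN_WORKER_SID).update(attributes=json.dumps(attrs))
```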
13. Avaya Communicator prompted for configuration and prevented agent registration
Solution
Support interventions restored Avaya Communicator sign-in and Agent Desktop telephony in the observed incidents. In multiple cases a remote session reopened Communicator, domain/connection fields were re-entered and saved, the application was restarted, and Agent Desktop subsequently registered and returned the agent to Ready. In other incidents technicians re-established Communicator authentication after a password/login timeout, after which Agent Desktop again received inbound calls. One ticket recorded the phone/service displaying 'Network Currently unavailable' and technicians advised reconnecting VPN, checking network connectivity and for credential sign-in prompts, but no final resolution was documented for that incident. Several tickets recorded only that technician assistance restored Communicator login and telephony without step-by-step details.
14. Teams blocked international outbound calls due to missing PSTN/Calling-Plan license
Solution
International outbound call failures were resolved by provisioning a PSTN calling capability for the affected Teams accounts. In reported cases the Microsoft Calling Plan international add‑on (or equivalent PSTN license) had been assigned to the user's Teams voice subscription, which enabled international dialing. In environments using third‑party trunks the same symptom was resolved after the carrier/trunk (Cloudya/NFON, Vonage) was configured and authorized for international outbound calls. Where Teams‑based PSTN calling was not available in the tenant, the affected users were migrated to Cloudya as an alternative telephony service; provisioning Cloudya required creating a reference user account to enable telephone services on the Cloudya side.
15. Outbound Cloudya/NFON numbers flagged as 'Fraud/Spam' on recipient Android devices
Solution
Investigations first collected user reports and screenshots and checked the scope of affected recipients and networks to distinguish carrier/provider reputation issues from device-level filtering. Two distinct outcomes were observed. 1) For carrier/recipient‑side reputation flags, providers (for example NFON) investigated number reputation, re‑provisioned or replaced the DID, and completed caller‑ID/business verification with carriers and Google where applicable; after re‑provisioning and verification the carriers/Google removed the spam flag and the label stopped appearing. 2) For device‑level filtering (notably Samsung Smart Call), analysis showed the smartphone feature marked the number as spam/suspicious; this behavior was not controllable from our systems or the provider and was communicated to requesters as a device‑side limitation. The resolution path therefore depended on the root cause determined during triage (carrier/provider reputation vs. recipient device filter).
16. Inbound callflow routing caused by outdated service/opening hours
Solution
Callflow and holiday schedule records were corrected on the affected telephony platforms so routing matched declared service availability. Specific fixes included:
• EM Sales inbound number (493031198720): opening hours were set to Monday–Friday 07:00–19:00, weekend shifts were removed, and the changes were saved and applied so callers were no longer routed during unintended times.
• Schools phoneline (Avaya): a Holiday table was created, a closure was scheduled for 20 Dec 2024 16:00 to 2 Jan 2025 08:30, and the Christmas greeting was applied so callers heard the closure announcement.
• FS Studsek (Vonage): a holiday entry was added in Vonage (assistance logged 2024-07-19) so the closed/holiday announcement and routing took effect.
• IU Akademie (Vonage Interaction Architect): the Interaction Architect availability/schedule for the “SO Upskilling *82” skill was updated to Monday–Thursday 09:00–17:00 and Friday 09:00–16:00 (change applied on 24/25), and the ticket was closed.
Several tickets contained no step-by-step implementation details; the recorded changes were limited to the updated schedules and applied holiday entries on the respective platforms.
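The schedule corrections above all implement the same time-of-day/holiday rule. The sketch below is a purely illustrative model of that rule, not the Avaya, Vonage or Interaction Architect configuration itself; the example times and holiday range are taken from the cases described above.

```python
# Purely illustrative model of the opening-hours/holiday rule above; it is not
# the Avaya, Vonage or Interaction Architect configuration itself. Example times
# and the holiday range are taken from the cases described above.
from datetime import date, datetime, time

OPENING_HOURS = {  # weekday -> (open, close); Monday=0 ... Friday=4
    0: (time(7, 0), time(19, 0)),
    1: (time(7, 0), time(19, 0)),
    2: (time(7, 0), time(19, 0)),
    3: (time(7, 0), time(19, 0)),
    4: (time(7, 0), time(19, 0)),
}
HOLIDAY_CLOSURES = [(date(2024, 12, 20), date(2025, 1, 2))]  # inclusive date ranges

def route_inbound_call(now: datetime) -> str:
    for first_day, last_day in HOLIDAY_CLOSURES:
        if first_day <= now.date() <= last_day:
            return "holiday_announcement"
    window = OPENING_HOURS.get(now.weekday())
    if window and window[0] <= now.time() < window[1]:
        return "agent_queue"
    return "closed_announcement"

print(route_inbound_call(datetime(2024, 12, 23, 10, 30)))  # -> holiday_announcement
```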
17. Avaya reporting does not retain or expose per-direct-number call-volume data
Solution
The request was escalated to the specialist telephony/reporting team, who confirmed the reporting system only retained and exposed aggregated call-centre statistics and did not store or surface per-direct-line (individual DDI/extension) call-volume data. Consequently, historical call-volume reports for extensions 7109 and 7119 from the requested period could not be produced, and the requester was informed that the data was unavailable.
18. Twilio blocking outbound WhatsApp when WAPP consent record appears stale
Solution
Multiple distinct root causes produced the same observable failures, and each incident was resolved according to the cause identified. Resolution actions included consent-store reconciliation, establishing an active chat channel for the Task, template-management corrections, completing the automatic post-switch/reply flow, using approved templates where the 24-hour-window rule applied, WAPP service mitigation, and correcting operator reply controls. After the applicable remediation, Twilio accepted previously blocked outbound and inbound WhatsApp messages and conversation histories populated.
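Where the 24-hour-window rule applied, the approved-template path looks roughly like the following sketch using the twilio Python SDK; the sender, recipient, Content SID and template variables are placeholders, not values from the tickets.

```python
# Hedged sketch of the approved-template path (twilio Python SDK assumed; sender,
# recipient, Content SID and template variables are placeholders).
import os
from twilio.rest import Client

client = Client(os.environ["TWILIO_ACCOUNT_SID"], os.environ["TWILIO_AUTH_TOKEN"])

message = client.messages.create(
    from_="whatsapp:+14155238886",                     # WhatsApp-enabled sender (placeholder)
    to="whatsapp:+491701234567",                       # recipient (placeholder)
    content_sid="HXxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",  # approved Content template
    content_variables='{"1": "Max Mustermann"}',       # template placeholder values
)
print(message.sid, message.status)
```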
19. Inbound call qualification failure caused by recent code regression
Solution
The issue was escalated to the specialist team, who identified a small regression introduced in a recent change. Developers implemented a code fix for the bug and deployed it to production. After deployment the call-qualification functionality was restored and agents were able to qualify calls following inbound calls.
20. One-off PBX call forwarding activation for a specific DID
Solution
Support first verified whether the affected number or user had an extension assigned in the corporate PBX (Cloudya). For DIDs/extensions managed in Cloudya, support configured immediate call forwarding or rerouted the affected DID(s)/extension(s), obtained any required approvals (for example via an Application Request), activated the new routing, and verified behavior by placing test inbound calls. Region-level requests were handled by implementing Cloudya routing for the shared region phone (for example a Region Ost setup covering multiple sites) so staff could access region phones when a site was unstaffed; these changes were approved as required, activated, and tested. Where appropriate, numbers were re-routed to the correct customer-service region or queue or escalated to the specialist team. For provider-managed numbers that were not yet enabled for inbound calling (for example Twilio-managed landlines awaiting DS inbound activation), support applied Cloudya forwarding rules to route the provider number temporarily to the user’s mobile. For requests referencing numbers not managed by the PBX (for example personal mobile numbers with no Cloudya extension), support confirmed no extension existed, informed the requester that the PBX could not forward an external mobile number, and closed the ticket without applying changes. Support also confirmed whether users preferred self-service via the Cloudya portal or support-assisted changes and coordinated related offboarding decisions (email/autoreply retention, account deletion, hardware return).
21. Twilio outbound call failure 45301 caused by incorrect or misformatted destination number
Solution
Two classes of Twilio outbound failures that returned 45301 (and one 4503) were identified and resolved. In incidents caused by invalid or misformatted destination numbers — including number formats that caused Twilio Lookup Service failures — destination numbers were normalized to E.164 (correct country codes and removal of extraneous characters) and an internal bugfix addressing the Lookup behavior was deployed; subsequent outbound calls connected and 45301 failures did not recur. In separate incidents where calls were immediately dropped or PowerOutBound tasks stopped arriving despite basic network checks, failures were traced to contact-dialer account misconfigurations (admin/agent accounts with no required skills assigned); assigning the required skills restored routing and eliminated the 45301 failures.
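The normalisation step described above can be sketched as follows, assuming the open-source phonenumbers library and Twilio Lookup v2; the raw input and the region default are examples, not values from the tickets.

```python
# Sketch of the normalisation described above, assuming the open-source
# `phonenumbers` library and Twilio Lookup v2; the raw input is a fictitious example.
import os
import phonenumbers
from twilio.rest import Client

raw = "0170 / 123 45 67"                        # as entered in the CRM (example)
parsed = phonenumbers.parse(raw, "DE")          # default region for national-format input
if not phonenumbers.is_valid_number(parsed):
    raise ValueError(f"Not a dialable number: {raw!r}")
e164 = phonenumbers.format_number(parsed, phonenumbers.PhoneNumberFormat.E164)

client = Client(os.environ["TWILIO_ACCOUNT_SID"], os.environ["TWILIO_AUTH_TOKEN"])
lookup = client.lookups.v2.phone_numbers(e164).fetch()
print(e164, lookup.valid, lookup.country_code)
```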
22. Twilio Caller ID registration blocked when service mobile cannot receive verification calls
Solution
Support obtained the service or corporate number in international format and registered it as a verified Caller ID in Twilio/Twilio Flex; this step resolved many cases where outbound calls showed Twilio defaults, other users’ numbers, or random numbers. When automated Twilio verification callbacks failed because the service handset, Cloudya/NFON provisioning, or Twilio admin permissions were unavailable, verification was paused until provisioning/permissions were restored; where callbacks were impossible verification was completed via a short joint Teams/PSTN verification call or by entering the Twilio verification code on the user’s mobile. Manual re‑entry, deletion/recreation, or direct population of the Caller ID field in Twilio cleared greyed‑out UI behaviour and resolved cases where Dialpad/Click‑to‑Dial closed immediately or would not initiate calls. Accounts that contained an obsolete or unverified number produced intermittent outbound failures and the Twilio error that the outbound number was not a verified caller ID; these were resolved by removing the outdated number and assigning the correct verified number. Twilio account and power_outbound settings were reconfigured when necessary; in cases of random caller‑ID switching a Twilio engineer applied account‑side changes so only the user’s own Caller ID appeared. For Virtual Campus/B2C endpoints personal mobile numbers were not assigned and users were directed to use the provided site/VC ID so calls and transfers routed and accepted correctly; specialists likewise advised selecting the appropriate location ID for B2C accounts. For Dialpad presentations support enabled the verified number in Twilio, set the appropriate Caller ID in Dialpad, and updated site/contact numbers when they were outdated. Cloudya/NFON access and provisioning (passwords and provisioning tests) were restored before completing Twilio verification; where an internal administrator lacked Twilio admin permissions verification and configuration were completed using or by granting an account with Twilio admin access. Several tickets were closed without completion when required approver information or phone details were not provided; those requests were completed only after the approver (team lead or cost‑center owner) and phone details were supplied. In at least one instance a Twilio login/authentication problem was resolved by re‑authentication before Caller ID configuration could be completed. Additional diagnostics captured after these tickets showed that some outbound failures returning error 45391 correlated with transient network instability during call initiation; those incidents were investigated with network tests (including wired LAN tests) and by collecting TaskSIDs and screenshots when TaskSIDs were not initially present to enable deeper Twilio-side debugging.
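For context, Twilio's standard outgoing-caller-ID verification flow referenced throughout these cases looks roughly like the sketch below (twilio Python SDK; the number and friendly name are placeholders). The validation_code returned by the API is the code that has to be confirmed when Twilio places the verification call, which is the step that was completed jointly when the service handset could not take the call itself.

```python
# Rough sketch of Twilio's standard outgoing-caller-ID verification flow
# (twilio Python SDK; number and friendly name are placeholders). The returned
# validation_code is the code that must be confirmed when Twilio places the
# verification call to the number.
import os
from twilio.rest import Client

client = Client(os.environ["TWILIO_ACCOUNT_SID"], os.environ["TWILIO_AUTH_TOKEN"])

validation = client.validation_requests.create(
    friendly_name="Campus service mobile",
    phone_number="+491701234567",
)
print("Code to confirm during the verification call:", validation.validation_code)

# Afterwards the number should be listed among the verified outgoing caller IDs.
for caller_id in client.outgoing_caller_ids.list(phone_number="+491701234567"):
    print(caller_id.sid, caller_id.phone_number, caller_id.friendly_name)
```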
23. Outbound calls not processed when Power_Outbound enabled before its scheduled start
Solution
Investigations identified that Power_Outbound had a configured scheduled start and would not properly process outbound calls or scheduled callbacks when it was enabled before that start time. In multiple incidents outbound calling and callback processing were restored either after the service reached its configured start time or when support re-enabled/fixed the Power_Outbound/PowerOutbound service. Observed Twilio symptoms included calls remaining in the Available queue and not being pulled, the Reserved Calls UI field failing to increase or reflect processed calls, and manual adjustments to reserved call counts having no effect. Recorded remediation actions that coincided with resolution included re-enabling Power_Outbound and allowing the service to enter its scheduled window; one support attempt also restarted the Twilio integration and cleared the browser cache. A Dialpad-specific incident with ~2 rings then audio loss and a frozen call-preview UI was resolved by re-enabling Power_Outbound; support additionally arranged assignment of a personal phone number to reduce reliance on a shared number. Several related tickets were closed as resolved without documented technical steps.
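A minimal illustration of the scheduling behaviour described above (not the actual Power_Outbound implementation) is sketched below; the window times and timezone are assumptions used only to show the guard logic.

```python
# Illustrative guard only (not the actual Power_Outbound implementation):
# outbound tasks are pulled only inside the configured window. Window times
# and timezone are assumptions.
from datetime import datetime, time
from zoneinfo import ZoneInfo

POWER_OUTBOUND_START = time(9, 0)    # configured scheduled start (example)
POWER_OUTBOUND_END = time(18, 0)     # configured end of the dialing window (example)

def dialer_may_run(now=None):
    now = now or datetime.now(ZoneInfo("Europe/Berlin"))
    return POWER_OUTBOUND_START <= now.time() < POWER_OUTBOUND_END

if not dialer_may_run():
    print("Power_Outbound enabled outside its scheduled window; calls stay queued.")
```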
24. Vonage WebRTC add-on missing from webshop after platform update
Solution
The issue was caused by a prior Vonage platform update that removed the standalone 'Web RTC for Vonage CC' add-on. Vonage had merged the WebRTC functionality directly into the Contact Pad, so the separate webshop add-on was no longer published or required. The webshop showing only 'Screen Lock' was expected behaviour, and the add-on had not been renamed to 'Balto Vonage'. The user was informed and the ticket was closed.
25. Twilio POB inbound call preview behavior varies by opportunity stage
Solution
Investigations distinguished two separate Twilio-related issues and one planning request. For Twilio POB call-routing behavior, the team determined the behavior was intentional: incoming-funnel calls (Leads, BOBs, applications in status "Eingang") were configured for immediate delivery without an agent preview to prioritize rapid handling, while calls tied to later opportunity stages were routed with a preview/selection menu so agents could prepare; the configuration rationale and resulting behavior were communicated to stakeholders. For inbound Oppy failures, support executed standard troubleshooting (client restarts and checklist), escalated to developers, and developers contacted Twilio; the issue was identified as a known vendor-side defect with no local fix available and the case remained pending a Twilio/developer patch (a separate instance had temporarily resolved after issuing a new laptop, but that was not confirmed as a general resolution). For the DACH Vonage routing request, stakeholders met to gather requirements and agreed to evaluate routing options based on Opportunity Status (example: Opportunity Status "Definite" → forward to Studierendensekretariat); an implementation approach and timeline remained in planning pending that evaluation.
26. Provisioning of specific Twilio phone-number requests
Solution
Requests to provision, locate or register specific phone numbers or Twilio IDs/callback numbers were handled according to the system and organisational policy. When a requested number existed in Twilio it was assigned to the requester’s Twilio account. For B2C staff, policy prevented issuing individual Twilio/Dialpad IDs; those numbers were registered to the standardized shared Dialpad ID “PreSales_DS_DeinStandort” and requesters were informed. Inventories in NFon/Cloudya and Vonage were searched for provided numbers; located numbers had ownership, routing and billing/PO attribution documented (for example a toll‑free number found in NFon was identified as a DS Sales number). Numbers not found in provider inventories were escalated to the Twilio team for further Twilio-side search. Requests to create separate lines in Vonage were implemented by provisioning a new line and applying the same forwarding configuration as the existing line while leaving the original line unchanged. During migrations or transitions inbound routing was configured so specified numbers were routed from legacy providers (such as Intracall or Questnet) to Twilio while other numbers remained on the legacy platform. Requests to add Twilio callback IDs were completed by adding the callback number/ID to the user’s Twilio configuration and recorded in ticket comments; occasionally ticket resolution fields contradicted completion comments (e.g., showing “Won't Do” despite a completion note) and ticket comments were relied on as the operational record. All unresolved cases or unusual findings were forwarded to the specialist telephony team for confirmation before ticket closure.
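The Twilio-side number search behind these requests can be sketched with the twilio Python SDK as follows; the requested number, country and search filter are placeholders, and Vonage/NFON inventories would need their own tooling.

```python
# Sketch of the Twilio-side number search behind these requests (twilio Python
# SDK; the requested number, country and search filter are placeholders). Vonage
# and NFON inventories need their own tooling.
import os
from twilio.rest import Client

client = Client(os.environ["TWILIO_ACCOUNT_SID"], os.environ["TWILIO_AUTH_TOKEN"])
requested = "+49301234567"

existing = client.incoming_phone_numbers.list(phone_number=requested)
if existing:
    print("Already on the account:", existing[0].sid, existing[0].friendly_name)
else:
    # Search purchasable numbers matching the requested digits.
    for candidate in client.available_phone_numbers("DE").local.list(contains="30", limit=5):
        print("Purchasable:", candidate.phone_number, candidate.locality)
```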
27. Unexpected callbacks traced to worker phone-number mapping and routing logic
Solution
Support investigated recurring misrouted calls, callback delivery errors, lead-assignment mistakes, and call-history visibility gaps across Twilio, Vonage and contact-center tooling and identified several distinct root causes, each with targeted remediations or investigative actions.
Investigative artifacts used across incidents included Twilio and Vonage provider call logs and TaskSids, IVR/callflow mappings, contact‑center worker/queue records, Salesforce agent‑state and backlog checks, Power Outbound profile resets, recipient spam‑status checks, and Dialpad integration/error logs where available. Confirmed remediations consisted of correcting routing and IVR fallback logic, updating provider number/stub assignments and call‑forwarding endpoints, resetting user/outbound profiles, adjusting Power Outbound lead routing, advising on spam‑label handling and using alternate mobiles for urgent calls, engaging carriers/providers for caller‑ID reputation investigations, and escalating dashboard indexing/configuration faults. A small set of cases (notably the Dialpad Opportunity association failure, some round‑robin imbalances, and the channel‑toggle inbound case lacking trace identifiers) required further vendor or operations investigation and were retained for additional tracing using TaskSids, timestamps and provider logs.
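The provider call-log checks listed above can be approximated with the twilio Python SDK as in the sketch below; the number and date window are placeholders, and Vonage logs require the corresponding Vonage tooling.

```python
# Sketch of the provider call-log check (twilio Python SDK; number and date
# window are placeholders). Vonage logs require the corresponding Vonage tooling.
import os
from datetime import datetime
from twilio.rest import Client

client = Client(os.environ["TWILIO_ACCOUNT_SID"], os.environ["TWILIO_AUTH_TOKEN"])

calls = client.calls.list(
    to="+491701234567",
    start_time_after=datetime(2024, 7, 1),
    start_time_before=datetime(2024, 7, 2),
    limit=50,
)
for call in calls:
    print(call.sid, call.from_formatted, call.to_formatted, call.status, call.start_time)
```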
28. Perceived Power Outbound outage caused by low task/applicant volume, not platform failure
Solution
Support analysis found these incidents were typically caused by empty or starved task queues rather than a single platform outage. Root causes identified across investigations included genuinely low or seasonal lead/opportunity volume, colleagues having already processed queued tasks (task counters showing 0), and upstream ingestion or task-push failures (partial imports, stale or missing leads, or ingestion backlogs). Dialing and routing behavior were sometimes impacted by site/server load or Twilio dialing delays; a subset of reports included misassigned outbound caller IDs. No consistent Twilio or Dialpad platform exceptions or error codes were observed; one case recorded a persistent Dialpad client error "try again or contact support" that did not clear after a client reboot or basic connectivity checks.
Support actions and observations that led to restorations included verifying outbound-channel flags (for example, Voice_Outbound and Power_Outbound), checking Twilio queue activity and agent-to-site assignments, and inspecting dialer configuration such as the "calls pulled" parameter (support recommended increasing the configured calls-pulled value — e.g., from 30 toward ~50 — although increasing this value did not always immediately trigger dialing). Support also ensured relevant Salesforce applicants/records were active and assigned; re-ingesting or scheduling tasks and clearing ingestion backlogs correlated with recoveries. Where dialing or number anomalies were configuration-related, outbound-number assignments and routing were escalated to specialists and CRM/import teams engaged for partial-import or push-backlog conditions.
Applied mitigations recorded in tickets included using a live-activity visibility account and designating an internal Twilio key user as a backup monitor, temporarily reallocating agents or reverting to manual telephony while queues refilled, and monitoring until activity resumed. Some incidents had no definitive fix recorded at triage and were resolved after additional troubleshooting calls or when inbound task/activity volume returned.
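The queue-activity check used to distinguish a starved queue from a stuck dialer can be sketched against the TaskRouter real-time statistics endpoint as follows (twilio Python SDK; the Workspace and TaskQueue SIDs are placeholders). An empty tasks_by_status map while workers are available points to starved intake rather than a platform failure.

```python
# Sketch of the queue-activity check (twilio Python SDK; SIDs are placeholders):
# an empty tasks_by_status map while workers are available points to starved
# intake rather than a platform failure.
import os
from twilio.rest import Client

client = Client(os.environ["TWILIO_ACCOUNT_SID"], os.environ["TWILIO_AUTH_TOKEN"])
WORKSPACE_SID = "WSxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
TASK_QUEUE_SID = "WQxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"

stats = (
    client.taskrouter.v1.workspaces(WORKSPACE_SID)
    .task_queues(TASK_QUEUE_SID)
    .real_time_statistics()
    .fetch()
)
print("Tasks waiting:", stats.total_tasks, "by status:", stats.tasks_by_status)
print("Available workers:", stats.total_available_workers)
```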
29. Cloudya (CARE) web app full unavailability due to provider-side outage
Solution
Support identified the incidents as provider-side Cloudya/CARE service outages and escalated them to the platform/provider operations team. The provider worked the outage and restored service; support then verified access across affected accounts. After restoration, support completed or retriggered password-reset flows and resolved lingering client issues (in one case via a Microsoft Teams remote support session when the Cloudya app would not open). During the outage, no user-side remediation reliably restored service; issues such as missing password-reset emails and failed sign-ins were resolved only after the provider restored availability.
30. Cloudya upgrade request blocked by lack of administrative credentials
Solution
The ticket was escalated to the Cloudya specialist/packaging team, who initiated and completed the rollout to Cloudya 2.0.0 on behalf of the requester. The specialist asked the user to verify functionality after deployment and closed the ticket once the rollout was finished.
31. Salesforce telephony feature unavailable due to missing permissions/approval
Solution
Support clarified their scope (they only handled account creation) and responded by providing the requester with the correct channels to request the necessary permission and service changes: the Permission Change portal, the Bug Reporting portal, and the Twilio Requests portal. The request was left awaiting approval after the requester was directed to those portals.
32. Twilio initial-login frontend TypeError from null addEventListener during first-time setup
Solution
The Twilio initial-login/configuration frontend code was patched to handle the null reference before attempting to call addEventListener. The fix was deployed to the affected environment and the user retested the onboarding flow; the initial setup and UI functionality worked after the deployment.
33. Twilio sign-in failure due to missing password-reset email — Omnichannel-handled account recovery
Solution
Internal IT confirmed they could not perform Twilio account or password recovery for Twilio-managed accounts and directed affected users to the Omnichannel team. Omnichannel handled account and password recovery after users submitted requests via the Omnichannel support form/link. Separately, some login failures that involved a non-working 'Forgot password' flow or missing reset emails were resolved by having users sign in via the official Twilio login link using their existing Twilio credentials rather than triggering a password reset.
34. Verifying Vonage license for call-tracking / virtual-number campaign attribution
Solution
The inquiry was escalated to the Vonage specialist team and the requester was contacted via Microsoft Teams to explain the current status. The specialist team had not been able to confirm within the ticket whether the existing license included virtual-number call-tracking/campaign-tracking capabilities, so the request remained with vendor-specialist review and clarification of licensing/features was documented for follow-up.
35. Vonage account provisioning and Salesforce record linkage for new users
Solution
Vonage accounts were created and the account details were recorded and linked to the users' Salesforce records when a valid reference/comparison user was available (provisioning actions were documented in Salesforce — e.g., Haynert, Daniel on 2025-07-22 09:37). When a ticket lacked a required reference user, support requested one from the requester and proceeded to create and record the account after it was provided. When the supplied reference/comparison user did not have a Vonage account, support determined the request originated from a department that typically used Cloudya and therefore did not create a Vonage account; the need for Cloudya was noted in the ticket. All outcomes and product decisions were documented in Salesforce.
36. Calls terminating ~2 seconds after answering for a single external contact
Solution
Specialist investigation found no telephony-system or configuration errors and concluded the symptoms were consistent with unstable or poor network/connectivity on the applicant's side. No server-side or platform changes were applied; the case was closed pending the applicant restoring a stable connection and retesting.
37. Intermittent DTMF tones not recognised by IVR for specific incoming numbers
Solution
The incident was escalated to the telephony specialist team. They inspected the DTMF-transfer setting for the affected extension (studentsoffice ext. 855) and confirmed DTMF transfer was enabled. Call/IVR logs and provider traces were reviewed but showed no systemic or reproducible fault; local tests of DTMF detection succeeded. The outcome documented the configuration as correct and the issue as intermittent with no immediate root-cause identified, and the case was left under active monitoring for recurrence with provider-trace capture if the problem reappeared.
38. POB/Oppy session stuck in a phantom 'dummy call' causing no dial tone
Solution
A technician determined the symptom was a stuck/phantom "dummy call". The user executed the documented procedure to clear/unhang the dummy call (referred to internally as "Festhängen in sog. Dummy-Calls", i.e. getting stuck in so-called dummy calls). After the phantom call was cleared, POB resumed normal operation and dial tone/functionality was restored.
39. Twilio blocking outbound international calls due to destination number risk classification
Solution
Incidents were attributed to provider-level destination-number risk classification, carrier routing policies, or provider geo-permission rules that prevented calls to affected prefixes or individual numbers. In resolved cases Twilio reclassified the impacted destination ranges and adjusted the account’s international dial permissions/whitelist which restored service. Twilio required concrete example destination numbers and justification to complete reclassification. Provider-specialist teams routinely requested provider-side call identifiers and traces (for example CallSid/TaskSid, carrier call GUIDs and internal traces such as Oppylink) so the downstream carrier could review classification and routing decisions. Investigations observed mixed dialing-permission states in the Twilio console for samples (some numbers shown as "Blocked", others as "Allowed - enabled"), and geo-permission blocks that correlated with caller IP geolocation or use of a neighboring-country network (notably users near borders). Where a destination was also unreachable from a mobile device the failure reflected a PSTN-side outage rather than CPaaS blocking; conversely, destinations that worked from mobile but failed via the CPaaS typically indicated provider-side blocking or routing policy. Because many incidents presented no explicit error codes, collecting exact example destination numbers plus any provider-side call identifiers materially improved the provider’s ability to diagnose and restore service. One Vonage report recorded a carrier Call GUID for an intermittently blocked UK number but had no definitive resolution documented. For some regions (for example DE) numbers were generally enabled in Twilio except those classified as "Extreme High Risk."
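Before escalating a blocked-prefix report, the account's geo/dial permissions can be inspected with what is assumed here to be Twilio's Voice Dialing Permissions API, roughly as sketched below; the country code is an example and the call is read-only, it does not change any classification.

```python
# Hedged sketch, assuming Twilio's Voice Dialing Permissions API: read-only check
# of whether a destination country is enabled before escalating a blocked-prefix
# report. The ISO country code is an example.
import os
from twilio.rest import Client

client = Client(os.environ["TWILIO_ACCOUNT_SID"], os.environ["TWILIO_AUTH_TOKEN"])

country = client.voice.v1.dialing_permissions.countries("GB").fetch()
print(country.name,
      "low-risk enabled:", country.low_risk_numbers_enabled,
      "high-risk special enabled:", country.high_risk_special_numbers_enabled)
```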
40. Twilio POB creating duplicate outbound tasks and repeatedly calling same Salesforce opportunities
Solution
Support engineers captured call timestamps, duplicated Twilio Task SIDs, Salesforce Opportunity identifiers and session logs and escalated the consolidated data to Twilio for product-level analysis. In at least one occurrence, support scheduled and implemented a systemic deployment intended to stop repeated Oppy deliveries and asked the user to retest; that ticket was closed as Done without detailed logs or a documented root cause. Multiple incidents showed the dialer reporting reserved calls for other records while repeatedly delivering the same Opportunity and returning no Twilio error codes. Across the ticket corpus there was no definitive corrective configuration change or vendor-confirmed root cause recorded; vendor analysis was pending while incidents recurred during the June–August 2024 window.
41. Scheduled callback (Rückruftermin) recorded but not executed in Twilio
Solution
Investigation confirmed the callback record had been received into the system at the scheduled timestamp but was not acted upon by the managed service component. The incident was documented for deeper investigation by engineering, however no concrete remediation steps or root-cause determination were recorded before the ticket was closed.
42. Twilio Power Outbound slow call setup and repeated follow-up (FUP) assignment
Solution
The incident and a representative TaskSid were collected and the case was escalated to Twilio support for analysis. The ticket record shows it was forwarded to the vendor/engineering contact for deeper troubleshooting but contains no documented technical fix or configuration change; the ticket was closed after escalation without an internal remediation logged.