Building Fog Networks with Telecom Cloud Services

The Internet of Things (IoT) and the practice of connecting sensor devices to public cloud services will become commonplace in 2016. But it's unclear whether today's hyperscale cloud computing infrastructure can absorb billions of additional latency-sensitive devices in the future.

That being said, this anticipated surge in connected devices has some mobile network operators exploring new ways to complement their cloud service offerings, according to the latest market study by ABI Research.

One solution in particular is gaining traction: 'fog networking' is a means to combat these challenges, with various telecom vendors developing their own architectures to bring the cloud closer to the end-user.

A distributed architecture, fog networking consists of multiple end-user clients or near-user edge devices that store and process data locally, rather than routing it all through centralized cloud data centers.

Fog networking does this by enabling small purpose-built computing services to reside at the edge of the network, as opposed to on much larger servers in a hyperscale data center.

This doesn't replace the public cloud. Instead, fog networking enhances the cloud experience by managing some user data at the edge of the network. Administrators are then able to incorporate software analytics and additional security directly into their cloud services, as needed.
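To make the idea concrete, here is a minimal sketch of the kind of edge-side data reduction described above. All names and values are illustrative: a hypothetical edge node collects raw sensor readings, then forwards only a compact summary to the cloud instead of every sample.

```python
import statistics

# Hypothetical raw sensor stream: one temperature reading per second
# from a single device over a five-minute window (values are synthetic).
raw_readings = [20.0 + (i % 30) * 0.1 for i in range(300)]

def summarize_window(readings):
    """Reduce a window of raw readings to a compact summary that an
    edge node could forward upstream instead of every sample."""
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": round(statistics.mean(readings), 2),
    }

summary = summarize_window(raw_readings)

# The edge node ships four numbers upstream instead of 300 samples,
# and the raw data stays available locally for real-time decisions.
print(summary)
```

The design choice this illustrates is the one the article describes: latency-sensitive processing happens at the edge, and only the aggregate that the central cloud actually needs crosses the network.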

"The increase in connected devices presents two main challenges: the potential for unreliable communication due to network congestion and poor network connections for short-range wireless devices," said Sabir Rafiq, research analyst at ABI Research.

ABI believes that fog networking enables mobile network operators to build a new distributed telecom cloud computing market. It suits applications that generate large volumes of real-time data, where fast turnaround is crucial and sending everything to the cloud is not the optimal solution.

The key benefits of fog networking include:

  • Better Data Access: Removes the need to transport large quantities of data to the data center.
  • Enhanced End User Experience: Creates an edge network at numerous points where demand is the greatest, thereby positioning services and applications closer to the end-user.
  • Geographically Dispersed Infrastructure: Enables real-time processing of big data with software analytics and offers administrators the ability to support location-based mobility demands.

Moreover, computing is already moving to the edge of the traditional mobile network. Some mobile service providers assume that fog networking may introduce additional security risks. In practice, the model adds security layers: rather than data traversing many network nodes unchecked, traffic passes from the Internet through the operator's servers before reaching the edge nodes.

This approach means that extra firewalls and security checkpoints are in place to search for malicious activity, typically making it harder for known IoT cyber threats to cause a problem.
