Technology Cards

Brush up on your IoT knowledge!

Looking for information on specific technology topics, or just curious to learn more about recent IoT trends? We welcome you to explore our IoT wiki and research what drives the industry.

Our Offering Cloud of Things

The "Cloud of Things" is a cloud-based Software as a Service (SaaS) offering for remote device monitoring and data management. It enables you to manage and control remote assets and allows real-time data analytics through secure data transfer. Thanks to this highly scalable solution, you can easily build your own device network that is perfectly suited to your business.

The Cloud of Things platform is offered as a fully managed service and can be accessed through common browsers on personal computers, tablets or smartphones. The Cloud of Things web portal enables you to view all your registered devices and manage them remotely. The portal provides a dashboard for the graphical display of collected data, alarms or defined parameters (KPIs). The Cloud of Things web portal provides three main functionalities:

  • Cockpit helps you to organize and manage your devices
  • Device Management helps you to view, locate and manage your connected “things”
  • Administration helps you to administrate your account and add further users

To learn more about the Cloud of Things, please visit our webpage: https://iot.telekom.com/iot-en/platforms/cloud-of-things

World's largest certified portfolio of modules

Since the commercial launch of its IoT services, Deutsche Telekom has invested in scaling up its M2M and IoT business by certifying the world's largest portfolio of wireless communication radio chipsets and modules. To date, over 50 NB-IoT and LTE-M modules have successfully passed interoperability testing on numerous Deutsche Telekom networks, ensuring best-in-class connectivity quality, faster time-to-market, and the widest range of implementation choices for our customers. This complements a rapidly growing segment of 100 certified modules for 2G, 3G, 4G and 5G radio access bearers.

Figure: Certification Reduces Testing & Costs for IoT Devices

Our current certification lists include numerous single-mode (NB-IoT) and multi-mode (NB-IoT, LTE-M, 2G) solutions from suppliers worldwide. By collaborating closely with the industry, Deutsche Telekom aims to scale up the IoT business for itself and a whole ecosystem of partners. We work with chipset suppliers to identify interoperability issues in their protocol stacks early on; this ensures that the connectivity "DNA" of many OEM and ODM modules can be improved well in advance. Each product in our portfolio is assessed for performance (power consumption, throughput, latency) as well as interoperability against key features of our networks. Not only do we certify an initial commercial firmware, but we also track each product during its lifecycle to regularly assess potential design improvements. It is our commitment to ensure the best connectivity platform possible with our partners for your M2M and IoT solutions.

Figure: Certification Follows the Product Lifecycle

For the latest list of certified wireless communication modules, please refer to the Certification documentation on the IoT Digital Shelf: https://tsi.iotsolutionoptimizer.com/Learn/UserGuides.

Please note that most of the NB-IoT and LTE-M components have been characterized for power consumption and integrated into the IoT Solution Optimizer service for virtual twin modeling.

Cloud of Things Device Certification

Deutsche Telekom pursues an IoT device strategy that accelerates integration of solutions into its service platform, the Cloud of Things. Please refer to the attached process document ("IoT Device Strategy_HW.pdf").

To trigger the onboarding process, suppliers should fill out the application document ("Application Hardware Partnering.docx") and send it to the e-mail address mentioned within the document. Following that, Telekom representatives will set up a cloud account for your development teams so that they can conduct their tests against our cloud. The attached NB-IoT API description ("20170802-NBIOT-Protocol-Developerguide.pdf") is provided to help partners implement the required APIs properly. The test case catalog ("TestCaseCatalog_NB-IoTConnector_final_v1-0.docx") for the NB-IoT cloud connector focuses on the MQTT-SN API interface. By selecting the test cases in the catalog that are relevant for your device, you can self-declare compliance and proper implementation by filling out the results template ("SelfTestReport_NB-IoT_v2-5.docx").
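
The MQTT-SN interface referenced above exchanges compact binary datagrams (typically over UDP) rather than full MQTT over TCP. As an illustrative sketch only — not Telekom's actual API; the topic ID and payload below are invented — here is how an MQTT-SN QoS -1 PUBLISH datagram can be assembled, which lets a device publish to a predefined topic without a prior CONNECT exchange:

```python
import struct

MSG_TYPE_PUBLISH = 0x0C          # MQTT-SN PUBLISH message type
FLAG_QOS_MINUS_1 = 0x60          # QoS -1: fire-and-forget, no CONNECT needed
FLAG_TOPIC_ID_PREDEFINED = 0x01  # topic referenced by a predefined 16-bit ID

def build_publish(topic_id: int, payload: bytes, msg_id: int = 0) -> bytes:
    """Build an MQTT-SN PUBLISH datagram (1-byte length format).

    Layout: Length | MsgType | Flags | TopicId (2) | MsgId (2) | Data
    """
    flags = FLAG_QOS_MINUS_1 | FLAG_TOPIC_ID_PREDEFINED
    length = 7 + len(payload)    # 7 header bytes, including the length byte
    if length > 255:
        raise ValueError("use the 3-byte length format for large payloads")
    header = struct.pack("!BBBHH", length, MSG_TYPE_PUBLISH, flags,
                         topic_id, msg_id)
    return header + payload

# Hypothetical predefined topic ID 1 carrying a small sensor reading; the
# resulting datagram would be sent over UDP to the platform's MQTT-SN gateway.
datagram = build_publish(topic_id=1, payload=b'{"temp":21.5}')
print(datagram.hex())
```

At QoS -1 the message ID field is unused, which is why it defaults to zero; higher QoS levels and registered topic IDs require the full CONNECT/REGISTER handshake described in the developer guide.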

Figure: Benefits of a Hardware Partner Co-operation

The documents referenced above are provided as attachments to this card.

Naturally, the best way to integrate the APIs into a whole family of products is to do so at chipset or module level. As such, Deutsche Telekom works closely with industry partners to scale up cloud integration. Please contact your module vendor to inquire about the native integration of our Cloud of Things MQTT-SN APIs.

We look forward to reviewing your results and integrating your device into our partner catalog!

IoT Basics IoT Device

An IoT device is the remotely-deployed equipment (also traditionally referred to as "User Equipment", UE) used in M2M and IoT applications to monitor assets via sensing and actuation capabilities. Different realizations are possible, from low-end sensor nodes to high-end complex devices with multimodal sensing. Such IoT devices capture data about events and assets, for example inventory levels or climate. The collected data is usually sent over a Wide Area Network (WAN), such as a 3GPP™ mobile operator network; devices are rarely connected directly to the Internet. In many cases, the IoT device also supports Local Area Network (LAN) access technologies for exchanging data with other IoT devices or with a collecting node with WAN capability, referred to as a "gateway." The IoT device may itself act as such a gateway, in which case it is referred to as an "advanced IoT device," communicating with multiple "basic IoT devices" over its capillary network, or "M2M Area Network."

The need to support diverse use cases, or applications (ranging from asset control, remote monitoring and Smart Home, to PoS and vending machines), means that the hardware is typically purpose-built for the tasks at hand. That said, most IoT devices share common components and architectures. Specifically for NarrowBand IoT, it is critical to carefully select these components in order to achieve a highly-optimized, power-efficient design which delivers on the business case.

Figure: Common IoT Device Components

When assessing implementation aspects of an IoT device, it is necessary to consider its three solution layers:

  • Application Layer
  • Service Enablement Layer (Operating System / Device Management)
  • Connectivity Layer

The application logic, which runs on the IoT device’s microcontroller (MCU) and exchanges data with the IoT service platform, represents the Application Layer. It sends AT commands to the IoT device’s integrated communication module/chipset in order to make use of mobile network bearers over the Connectivity Layer. Please note that the IoT device application usually sits on top of the IoT service provider’s Service Layer client.
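
To make the AT-command interaction concrete: the Application Layer typically writes command strings to the module's serial port and parses the textual replies. The +CSQ signal-quality command shown below is standard 3GPP AT syntax, but the exact response framing varies by module, so treat this as a minimal sketch rather than a module-specific implementation:

```python
def parse_csq(response):
    """Extract the RSSI index from a '+CSQ: <rssi>,<ber>' AT response.

    Returns the raw RSSI index (0-31), or None when the module
    reports 99 ("not known or not detectable") or no +CSQ line.
    """
    for line in response.splitlines():
        line = line.strip()
        if line.startswith("+CSQ:"):
            rssi = int(line[len("+CSQ:"):].split(",")[0])
            return None if rssi == 99 else rssi
    return None

# A module typically echoes the command, then the result line, then "OK":
reply = "AT+CSQ\r\n+CSQ: 18,99\r\n\r\nOK\r\n"
print(parse_csq(reply))  # → 18
```

In a real device the reply would come from the module's UART; here it is a hard-coded string so the parsing step stands on its own.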

Figure: Solution Layers of IoT Services

IoT Applications

The term "IoT application" may refer to three different things:

  • A purpose-built application running on an IoT device's microcontroller (acting as a client), communicating and exchanging data over the Internet with an application hosted on a cloud platform or server
  • IoT vertical industries, which cluster a whole segment of similar vertical solutions from different manufacturers which can generate and analyze data in compatible (IoT) or proprietary (M2M) formats
  • A purpose-built M2M vertical solution, or M2M silo; this may be considered (part of) an IoT service when it is interconnected with other M2M solutions over a commonly-shared service enablement layer, such as oneM2M

Within the context of the IoT Solution Optimizer service's configuration screens, the last definition is what is being referenced.

Machine to Machine (M2M)

Machine-to-Machine (M2M) refers to vertical, purpose-built solutions allowing communication between devices of the same type and a specific application, all via wired or wireless communication networks. Most M2M solutions are business-to-business (B2B)-focused, although business-to-consumer (B2C) applications also exist:

  • M2M solutions are used to capture data about events and assets via connected devices; for example, inventory levels, climate, etc.
  • Through diverse use cases ranging from asset control, remote monitoring and Smart Home, to PoS and vending machines, M2M delivers productivity improvements, reduced costs, increased safety and security

Figure: M2M Applications are Independent Vertical Silos

M2M solutions share several characteristics:

  • M2M services are siloed and point-problem-driven; broad sharing of data does not happen between applications
  • They usually leverage mobile network connectivity; devices are rarely connected directly to the Internet
  • Deployments are communication- and device-centric
  • Information- and service-centric models are not in focus
  • From an applications and services perspective, it is asset management-driven, and never data- and information-driven

Figure: M2M Solution Characteristics

M2M is not a new set of technologies; it has been around for almost a decade. What makes it more and more relevant is the growing pressure across multiple industries to optimize their operations and to do more with fewer resources. Requirements are driven by efficiency improvements, sustainability goals or improved health and safety. There is also demand to better understand the physical environment and improve enterprise, consumer, governmental and societal value chains. In today's world, we have reached the crucial moment when surging demand converges with the availability of enabling technologies at the right cost. This is due to:

  • Improvements in technology and networking capabilities
  • Falling costs for efficient components, including silicon, sensor and actuator technologies
  • Cloud technologies that enable a price-effective, scalable way to store vast amounts of data for analysis

Application areas where M2M is rapidly growing include:

  • Automotive Telematics
    • Connected vehicles used for safety, security, services and infotainment
    • Navigation
    • Remote vehicle diagnostics
    • Road charging
    • Pay-as-you-go driving insurance
    • Stolen vehicle recovery
  • Fleet Management
    • Vehicle management and tracking
    • Data logging
    • Goods and vehicle positioning
    • Security of valuable and hazardous goods
  • Smart Metering
    • Consumption monitoring of electricity, gas, water, etc.
    • Remote meter management
    • Energy consumption data
  • Security Systems
    • Security alarm connectivity for homes and businesses
    • Surveillance
  • Remote Asset Monitoring
    • Generalized monitoring of merchandise
    • Health applications (remote patient monitoring, etc.)
  • ATM / PoS
    • Connectivity to centralized, secure billing systems

M2M system solutions always consist of several key elements:

M2M Applications

  • Application logic itself, implemented on the application server, and the devices it steers
  • The applications themselves are integrated server-side into the overall business process system(s) of the enterprise
  • Diverse processes co-exist, allowing for the highly-specific remote monitoring & control of assets; for example, there are diagnostic and data management functions

Service Enablement

  • This is a middleware that allows connected devices to communicate with applications
  • It provides generic functionalities which are commonly shared across different applications, thereby reducing costs and simplifying the application development
  • Because most service enablement functionalities are shared, it is possible to define a common service enablement capability across multiple M2M silos, thereby creating a path to ecosystems in an Internet of Things (IoT)

Networks

  • Networking provides for remote connectivity between the M2M device and the application-side servers
  • Different technologies can be used:
    • Wide Area Networks (WAN), such as mobile network operator 3GPP™ networks, fixed private networks or satellite links
    • Local Area Networks (LAN), commonly referred to as “capillary networks” or “M2M Area Networks”

M2M Devices

  • The products are attached to asset(s) of interest, providing sensing and actuation capabilities
  • Different realizations are possible, from low-end sensor nodes to high-end complex devices with multi-modal sensing
  • Two types can be defined:
    • Basic M2M devices, which send data over LAN to a gateway device
    • Advanced M2M devices, which may collect data on their own and/or aggregate data from basic M2M devices in the function of a gateway. These devices may also include processing capability to pre-filter and/or analyze data prior to sending it to the application server.

Figure: M2M System Solution Architecture

Internet of Things (IoT)

The Internet of Things (IoT) refers to a horizontal set of enabler technologies, systems and design principles which integrate multiple vertical M2M application and device solutions. Unlike M2M, IoT is:

  • Ecosystem- and innovation-driven
  • Data- and information-driven
  • Information- and service-centric
  • Device and network-technology agnostic
  • Based on open source code, open APIs, data specifications and (private and public) data marketplaces

Whereas M2M is predominantly implemented for B2B, IoT facilitates the emergence of B2C segments and new ecosystems, leveraging cloud-based models (open web- and “as-a-Service”-enabled). With IoT there is a shift away from device and connectivity-centricity towards services, data and intelligence. The current M2M model of disconnected silos, each with its own stakeholders, ownership, information, processes and services, is integrated into a “common fabric” of services and data. The promise of IoT is that the Internet will no longer be about people, media and content alone, but shall include all real-world assets, as intelligent nodes, exchanging information, interacting with people, supporting business processes of enterprises and creating knowledge.

Figure: IoT brings a Cloud Service-Based Society

The opportunities of IoT are cross-value chain, with value system integration across ecosystems. This helps maximize the efficiency in operations, and many existing and new actors may assume diverse roles providing new services. That said, new challenges also emerge, particularly centered around the increasing complexity in information and service management. There is clearly a need for more powerful analytics tools, visualization software and decision support systems. It is not surprising that advocates of proprietary IoT are heavily investing to build up these capabilities. Furthermore, a real-time control of complex operations and autonomous control systems will likely require distributed application software architectures, and real-time capability in networks. Numerous mobile network operators invest heavily in edge computing technologies to enable this new paradigm.

Common IoT use cases are ecosystem-centric. This means that in comparison with the M2M verticals, there are numerous new capabilities which in today's world have yet to become reality:

  • Smart Cities
    • Integrated environments
    • Optimized operations
    • Convenience
    • Socio-economics
    • Sustainability
    • Inclusive living
  • Agriculture
    • Forestry
    • Crops, livestock and fisheries
    • Urban agriculture
    • Process Industries
  • Robotics
    • Manufacturing
    • Natural resources
    • Remote operation
    • Automation
    • Heavy machinery
  • Infrastructure
    • Buildings & home management
    • Roads & rail
  • Consumer Electronics
    • Connected gadgets
    • Wearables
  • Robotics
    • Participatory sensing
  • Automotive Transport
    • Autonomous Vehicles
    • Multimodal Transport
  • Health / Well-being
    • Remote monitoring
    • Assisted living
    • Behavioral change
    • Treatment compliance
    • Sports and fitness
  • Environmental
    • Pollution management
    • Air, soil management
    • Noise management
  • Retail Banking
    • Micro-payments
    • Retail logistics
    • Product lifecycle information
    • Shopping assistance
  • Utilities
    • Smart grid
    • Water management
    • Gas, oil and renewables
    • Waste management
    • Heating, cooling systems

So how is IoT actually implemented?

IoT introduces a common framework so that M2M applications can share M2M infrastructures, environments and networks:

  • A Service Layer middleware enables standardized, common service functions which are applicable to different applications
  • Inter-technology and inter-application data-sharing allows new service and business opportunities to emerge, offering potential for market growth

Figure: IoT breaks down the M2M Silos

The horizontal application enablement allows IoT system integration, acting as a connectivity-layer-agnostic middleware that connects producers and consumers securely and transparently. It supports diversity across:

  • IoT Device types
  • Sensor / actuator data types
  • Underlying communication systems, hiding the complexity of WAN and LAN network usage from applications

The oneM2M standard introduces mandatory and optional sub-functions called Common Service Functions (CSFs), grouped under the Common Services Entity (CSE) component; a CSE represents an instantiated set of CSFs in an M2M environment.

Figure: Common Service Functions (CSF) in IoT

The oneM2M standard employs a simple, horizontal platform architecture fitting within a 3-layer model of applications, middleware services and network services. The "Application-Network-Device" model of M2M vertical silos is replaced in IoT with a horizontal "IoT Device-Gateway-Server" model. The application logic of M2M solutions sits as an Application Layer on top of the common Service Layer enabler, while a Network Layer below it handles device management, location services, device triggering, etc. For the latter, oneM2M standardization specifies the Network Service Entity (NSE), which provides the Service Layer with network services supported by the underlying network.

Figure: IoT Device-Gateway-Server Model

oneM2M standardization also defines the Application Entity (AE), which contains the application logic of M2M solutions. Underlying networks provide data transport services between entities in the oneM2M System. Such data transport services are not included in the NSE.
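
Registering an AE under a CSE is typically the first step an application performs. As a hedged sketch of the oneM2M HTTP binding (the CSE base URL, originator and application names below are hypothetical, and the attribute set is simplified), the request can be assembled like this:

```python
import json

def build_ae_registration(cse_base, originator, request_id, app_name, app_id):
    """Sketch of a oneM2M HTTP-binding request that registers an
    Application Entity (AE, resource type ty=2) under a CSE base."""
    return {
        "method": "POST",
        "url": cse_base,
        "headers": {
            "X-M2M-Origin": originator,               # who issues the request
            "X-M2M-RI": request_id,                   # unique request identifier
            "Content-Type": "application/json;ty=2",  # ty=2 => AE resource
        },
        "body": json.dumps({"m2m:ae": {
            "rn": app_name,    # resource name of the AE
            "api": app_id,     # application identifier
            "rr": False,       # request reachability
        }}),
    }

# Hypothetical CSE base URL and identifiers, for illustration only:
req = build_ae_registration("http://cse.example.com/cse-base",
                            "CmyDevice", "req-0001",
                            "smartMeterApp", "NsmartMeter")
print(req["headers"]["Content-Type"])
```

The request is built as a plain dictionary rather than sent over the network, so the structure — originator and request-ID headers, the resource type carried in the Content-Type, and the "m2m:ae" payload — can be inspected on its own.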

Figure: Building a Common API for IoT

The IoT Solution Optimizer is fully compatible with the oneM2M standard. oneM2M protocol overhead is to be considered in your design as part of the application payload, which is wrapped in a messaging, management, and/or transport protocol - or sent over Non-IP Data Delivery to/from the IoT Device.

IoT Vertical Industries Parking

What is the Smart Parking business about?

The number of vehicles on the road is rapidly outpacing the supply of available parking spots. Searching for a parking spot in the city often means wasted time and extra fuel costs for drivers, as well as unnecessary air and noise pollution for residents. Parking has become a widespread issue in urban development. This problem can be mitigated by the introduction of smart parking. Smart parking aims to individually match drivers to parking spots, improve parking space utilization, reduce management costs, and alleviate traffic congestion.

What are the key benefits of Smart Parking?

Smart Parking solutions collect parking data (such as occupancy and duration) and relay this information to service providers. This improves fee collection and reduces economic losses. Drivers can obtain real-time parking space information. For example, when only a few parking spots are available, drivers can be directed to the next vacant spot. This eases traffic congestion caused by frustrated drivers searching for potential spots. Furthermore, self-help payment saves manpower and allows parking service providers to reallocate labor from toll collection to parking supervision.

Energy Grid

The smart energy grid digitizes existing electricity networks, improving the reliability and transparency of the whole energy supply chain. Two-way digital communication between supplier and customer accompanies the delivery of electricity. The focus of the Smart Grid is the autonomous communication between electricity producers, power storage components and consumers. Advanced analysis tools, monitoring and control features make it possible to maximize efficiency and reduce power losses and costs across the whole electricity supply chain.

In the case of renewable energy technologies, a Smart Grid is a must because electricity production depends on weather conditions (e.g. solar energy needs sun). The Smart Grid responds to changing conditions and chooses the most efficient source. The included energy management applications help reduce costs by automatically adjusting the criteria for selecting power sources (e.g. shifting consumption away from high-price periods).

The key features of Smart Grid are:

  • Balanced load in the whole electricity network
  • Smart Market - Flexible demand driven market
  • Decentralized power production
  • Monitored network for efficiency gains and cost reduction

Asset Tracking

The tracking of assets empowers your business to perform constant monitoring of your valuable goods. Questions about where a company's valuable mobile devices and machines are currently located are a thing of the past. Asset Tracking lets you localize important assets and machines at any time, avoiding delays and unnecessary, expensive replacements of misplaced assets. In case of theft, Asset Tracking can trigger an alert quickly and expedite the resolution of the case. After all, the theft of working tools and supplies, machinery, and materials from companies and construction sites, for example, causes heavy losses every year.

Just some examples where asset tracking can save money and time:

  • General: Track your expensive equipment and receive an alarm when equipment leaves the defined geofence (e.g. a campus)
  • Healthcare: Track your medical equipment inside the hospital
  • Rental: Track leased equipment, cars and bikes in real time, and automatically generate customer invoices.
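
The geofence alarm in the first example reduces to a distance check: compare the great-circle distance between the asset's reported position and the geofence center against the geofence radius. A minimal sketch using the haversine formula, with invented coordinates:

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in metres

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def outside_geofence(asset, center, radius_m):
    """True when the asset has left the circular geofence."""
    return haversine_m(*asset, *center) > radius_m

# Example: a 500 m campus geofence around a (fictitious) site office.
office = (52.5200, 13.4050)
print(outside_geofence((52.5300, 13.4050), office, 500.0))  # ~1.1 km away
```

A production tracker would additionally debounce GPS jitter (e.g. require several consecutive out-of-fence fixes) before raising the alarm.
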

Public Transport

Revolutionizing Public Transportation with IoT

Connected cities are the future, and the capabilities for public transportation are incredible. Cities can upgrade buses and trains by incorporating the Internet of Things (IoT) into public transportation, making the passenger experience better and safer, as well as reducing costs.

Real-time tracking

IoT allows the real-time tracking of buses and trains, which benefits both riders and management. The rider knows when and where the transport will arrive, and the manager can alert the public to any schedule delays. New connected technologies will not only tell riders when the train will arrive, but also alert them to which cars are full and which have room to spare. Transport organizations can track vehicle movement, trace locations, monitor activities, and develop measures to ensure safety.

Wi-Fi onboard

Through IoT-enabled devices, public transport vehicles can offer Wi-Fi connectivity to enhance the customer experience and improve the consumer journey. Being able to work while on public transportation in the morning, or to finalize emails on the way home in the evening, has been a much-desired addition to daily commutes.

Optimize city transport dimensioning

Occupancy levels could be measured in real time and the data reported to transportation systems. When a major sporting event ends, transportation agents can deploy more trains or buses to ensure there is enough transport for all event-goers. Connected platforms can make this a reality for cities. The cost and operational savings could be enormous.

Unforeseen Events

IoT will help cities manage unexpected events much more efficiently. District managers can send alerts to citizens’ phones, offering alternative routes home if there is a problem with trains or a specific public transportation method. Transit agents can spread the word ahead of rush hour to avoid massive delays. Agents can also develop strategic contingency plans for emergency events, becoming prepared for anything.

Predictive Maintenance

Instead of waiting for a train or bus to break down, IoT enables predictive maintenance techniques, which means employees will know when a part might fail. They can then fix the part ahead of time, instead of waiting for it to break down and cause delays. Imagine never having a train break down, completely stopping and backing up passenger movement; predictive maintenance can make that happen. IoT in public transportation will also mean less downtime. Unexpected downtime of buses and trains can be expensive, which means cities could see drastic savings from IoT.

Waste Management

What is Waste Management?

The industry referred to as "Smart Waste" includes the activities and actions required to manage waste from its inception to its final disposal. This includes the collection, transport, treatment and disposal of waste, together with monitoring and regulation of the waste management process. Efficient waste management is fundamental for so-called "Smart Cities."

IoT provides waste monitoring, leading to cost-reduction and operational efficiency through improved energy management and reduced personnel costs, but there are other far-reaching benefits from using IoT devices (e.g. sensors on smart bins) and data. Observing an IoT-connected waste or recycling bin offers a clue. A smart bin is a waste or recycling bin outfitted with a sensor that can detect bin fill level, collection events, fire, tilt and temperature. There are a variety of sensors on the market that use ultrasonic sensors, laser measurement and image recognition to collect data.

What are the benefits of Waste Management?

Data from smart bins offers immediate and obvious benefits; however, when examining the data generated from a broader perspective, there is much more value than a trip saved to see if waste has been/should be collected:

  • Fill level measurement: With insight into which bins are not full on pick-up day, managers can ask whether these bins could be picked up less often, thereby reducing costs. On the flip side, identifying which bins are full before pick-up days and are likely to overflow can help managers prevent associated clean-up costs and cleanliness issues.
  • Collection events: The data gathered allows managers to determine if the bin is being collected as per the agreed-upon schedule with their waste/recycling vendors. It also provides a quick overview of how often pickups are missed as well as the time of the day pickups are typically being completed.
  • Fire alerts: Real-time and accurate alerts in cases where a container catches fire.
  • Tilt alert: Instant notification of when a bin gets tipped over.
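
The fill-level logic above can be sketched as a simple routing filter: bins above one threshold are scheduled for pickup, and bins above a higher threshold are flagged as likely to overflow. The bin IDs, readings and thresholds are invented for illustration:

```python
PICKUP_THRESHOLD = 0.70    # schedule a pickup above 70 % fill
OVERFLOW_THRESHOLD = 0.90  # warn about likely overflow above 90 %

def plan_pickups(fill_levels):
    """Split bins into 'collect today' and 'likely to overflow' lists."""
    collect = sorted(b for b, f in fill_levels.items()
                     if f >= PICKUP_THRESHOLD)
    overflow = sorted(b for b, f in fill_levels.items()
                      if f >= OVERFLOW_THRESHOLD)
    return collect, overflow

# Fictitious sensor readings (fraction of bin volume occupied):
readings = {"bin-01": 0.95, "bin-02": 0.40, "bin-03": 0.75}
collect, overflow = plan_pickups(readings)
print(collect)   # bins worth a truck visit
print(overflow)  # bins that need priority handling
```

In practice the thresholds would be tuned per bin from the historical fill patterns described below, rather than fixed constants.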

Smart Waste solutions help one identify patterns and trends on how bins fill up:

  • At some properties, in certain jurisdictions, bins can fill up on Fridays as workers complete end-of-week cleaning and often remain empty from Monday to Thursday.
  • Bins may be more or less full in the winter or summer.
  • For properties with weather dependent activities, such as patio dining areas, there may be a clear link between sunny days and more waste volume.

Comparing similar buildings is also possible, as some buildings generate more or less waste/recycling. Tenants in these buildings have different services based on waste output; for example, a restaurant tenant requires more frequent collection than an office tenant. Understanding these trends allows managers to properly forecast and plan waste services when new tenants move into a property.

Finally, Smart Waste helps cities combat illegal dumping. Through image-based camera systems that can visually assess waste types, managers are able to identify the bins where illegal dumping occurs and can take appropriate actions, such as locking bins and installing video surveillance if required.

Waste Management is essential for sustainable Smart Cities:

Ultimately, the goals of Waste Management can be defined as such:

  • Drive accurate sustainability reporting by using the actual volume measurement instead of estimates or inaccurate weights.
  • Charge tenants by the verified amount of waste or recycling that is generated instead of an estimate or inaccurate cost sharing of predicted volumes.
  • Reduce traffic, noise and congestion on a property with fewer truck visits due to reduced pickups.
  • Reduce wear and tear on parking lots, doors and enclosures with fewer truck visits.
  • On-demand pickup of waste and recycling eliminates unnecessary truck trips while preventing overflow.

Logistics

Better resource management, optimized processes, enhanced customer experiences and minimized costs: the transport and logistics sector is far ahead of other sectors and large parts of industry when it comes to digitization.

Figure: Transport and Logistics

The transport and logistics sector benefits from digitization and gives it impetus in return. Thanks to new digital technologies, this comparatively labor-intensive sector has been able to coordinate its considerable organizational efforts more efficiently, improve customer and partner contacts, further develop its business models, and thus attract new customers and open up new markets.

Growing Transport Volumes

Parcel carriers show how the market is changing and shifting. Since carriers started making deliveries directly to consumers, volumes have risen dramatically, confronting them with significant challenges: consumers expect delivery within 24 hours, and they want to be able to trace their deliveries at all times.

Transparency is Key

Speed and transparency are absolute musts. Freight carriers wanting to offer their customers seamless shipment tracking must start by digitally connecting their drivers and vehicles and synchronizing the resulting data at their head office. This makes it possible to keep customers informed about their delivery's location and status at all times and provides a complete overview of the transport chain.

Optimal Solutions

Information and communication technologies are essential components of the digital transformation: thanks to optimal connectivity, cloud computing, big data and security, machines, devices and sensors in the interconnected world of the Internet of Things can realize their full potential and provide optimal solutions for air and cargo ports, passenger transport, rail and road freight traffic, and fast-growing parcel delivery services. Connected processes help optimize storage facilities and devise new transport routes and options, for example. Moreover, shipments and vans can be monitored remotely, and downtime can be avoided through predictive maintenance.

Condition Monitoring

Imagine a world where manufacturers and service providers can know in advance when something will go wrong with their machines.

In industry, every machine failure or production downtime causes companies great expense. With condition monitoring, companies can easily check the status of their machines and intervene whenever necessary. The many sensors within the machines produce huge volumes of data that can be analyzed. A threshold is specified that the data stream's values must not exceed; as soon as this threshold is reached, the system raises an alarm.
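
The threshold alarm described above can be sketched in a few lines: a monitor scans the incoming data stream and reports the first reading that exceeds the configured limit. The readings and the limit below are invented for illustration:

```python
def first_breach(stream, threshold):
    """Return (index, value) of the first reading that exceeds the
    configured threshold, or None if the stream stays in range."""
    for i, value in enumerate(stream):
        if value > threshold:
            return i, value
    return None

# Fictitious vibration readings (mm/s) checked against an 8.0 mm/s limit:
readings = [4.2, 5.1, 6.8, 9.3, 7.7]
alarm = first_breach(readings, threshold=8.0)
print(alarm)  # → (3, 9.3)
```

Predictive maintenance, described next, goes a step further: instead of reacting to the breach itself, it models the trend in such readings to intervene before the limit is reached.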

Predictive maintenance adds an early warning system on top of this data. Through the use of artificial intelligence, possible anomalies are identified and compared with previous incidents that have already happened, so that technicians can intervene remotely or on the spot before the thresholds are reached. This allows businesses to assess the entire stock of deployed machines for the risk of possible future failure.
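A minimal sketch of such an early-warning check, using a simple rolling z-score in place of a full AI model (a deliberate simplification; the window size and z-limit are illustrative values):

```python
import statistics

def zscore_anomalies(history, window=10, z_limit=3.0):
    """Flag points that deviate strongly from the mean of the recent window.
    This is a stand-in for the anomaly detection described in the text."""
    flagged = []
    for i in range(window, len(history)):
        recent = history[i - window:i]
        mu = statistics.mean(recent)
        sigma = statistics.stdev(recent)
        if sigma > 0 and abs(history[i] - mu) / sigma > z_limit:
            flagged.append(i)
    return flagged

# Ten normal readings followed by one clear outlier:
data = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 1.1, 0.9, 1.0, 5.0]
print(zscore_anomalies(data))  # flags index 10
```

Production systems would compare flagged anomalies against a history of past incidents, as the text describes, rather than acting on a single statistic.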

Street Lighting

What is smart street lighting?

Smart street lighting refers to public street lighting that adapts to movement by pedestrians, cyclists and cars. It is an adaptive street illumination that dims when no activity is detected, but brightens when movement is detected. This type of lighting is different from traditional, stationary illumination or dimmable street lighting that dims at pre-determined times.

How is it implemented?

Street lights can be made intelligent by equipping them with cameras or other sensors that enable them to detect movement. Additional technology enables the street lights to communicate with one another. Different companies offer different variations of this technology. When a passerby is detected by a camera or sensor, the street light communicates this to its neighbors, which brighten so that people are always surrounded by a safe circle of light. In the smart lighting concept, street lights illuminate a longer distance ahead of the pedestrian than behind them.
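The asymmetric "circle of light" rule can be sketched as follows (a hypothetical controller; the counts of lights brightened ahead and behind are illustrative, not from any vendor's product):

```python
# Hypothetical smart-lighting sketch: given a detection at one light and the
# pedestrian's travel direction, brighten more lights ahead than behind.
def lights_to_brighten(detected_at, direction, n_lights, ahead=3, behind=1):
    """Return indices of street lights to switch to full brightness.
    direction: +1 walking toward higher indices, -1 toward lower."""
    lo = detected_at - (behind if direction > 0 else ahead)
    hi = detected_at + (ahead if direction > 0 else behind)
    return [i for i in range(lo, hi + 1) if 0 <= i < n_lights]

# Pedestrian detected at light 5, walking toward higher-numbered lights:
print(lights_to_brighten(detected_at=5, direction=+1, n_lights=20))
```

Real deployments would propagate this decision over the lights' mesh or cellular links rather than compute it centrally.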

White Goods

What are white goods?

There's a revolution in modern white goods as the industry moves from its electromechanical roots to electronic control, using sensors to boost the intelligence of these appliances. Home appliances are becoming smarter, more connected and more efficient with the Internet of Things (IoT). IoT brings power efficiency and automation to today's and tomorrow's smart appliances. It increases effectiveness and can bring the following advantages:

  • Smart connected white goods give the manufacturer a touch point to communicate to the end user.
  • Telemetry data can be used to determine whether something’s going to break before it actually breaks.
  • Preventive maintenance can now be used to stop problems before they occur, or to enable a service department to fix a problem in a convenient maintenance window with limited disruption to the thing’s service.
  • Usage data, a new class of data, can now be accessed and used to understand user experiences and interactions.
  • Continuous engineering can be used to optimize future designs, creating improvements based on real user experiences.
  • New updates and features can be added based on interests and usage patterns thereby increasing loyalty and satisfaction.
  • Data-as-a-service opportunities can be identified and used to better enable a wider set of stakeholders across the value chain – different companies, partners, and even different departments within an organization. Several organizations stand to benefit from interacting with the data from that washing machine, ranging from utilities and insurers to detergent suppliers, all of which have a vested interest in the connected washing machine and its user.
  • New business transformation opportunities can be explored whereby the washing machine itself can be turned into a service where the thing is not purchased, but given to the user, with the user adopting a pay per wash experience.

The best is yet to come...

Many appliance makers may soon become content providers in their own right by developing intelligent variants of their products. For example, appliance makers may build washing machines that can determine which RFID-tagged clothes can be washed together and how much detergent should be used on a load. Future refrigerators will be able to propose nutritious meals and recipes using RFID-tagged food while warning consumers which foods are about to expire, etc.

Environmental Monitoring

The applications of IoT in environmental monitoring are broad − environmental protection, extreme weather monitoring, water safety, endangered species protection, commercial farming, and more. In these applications, sensors detect and measure every type of environmental change.

Air and Water Pollution

Current monitoring technology for air and water safety primarily uses manual labor along with advanced instruments, and lab processing. IoT improves on this technology by reducing the need for human labor, allowing frequent sampling, increasing the range of sampling and monitoring, allowing sophisticated testing on-site, and binding response efforts to detection systems. This allows us to prevent substantial contamination and related disasters.

Extreme Weather

Though the powerful, advanced systems currently in use allow deep monitoring, they rely on broad instruments such as radar and satellites rather than more granular solutions, and their instruments for finer detail lack the same targeting accuracy. New IoT advances promise more fine-grained data, better accuracy, and flexibility. Effective forecasting requires high detail and flexibility in range, instrument type, and deployment, which allows early detection and early responses that prevent loss of life and property.

Commercial Farming

Today's sophisticated commercial farms have exploited advanced technology and biotechnology for quite some time; however, IoT introduces access to deeper automation and analysis.

For example, sensors measure moisture at different soil depths and send the data to a central system, which analyzes it and waters only the crops in need. Much of commercial farming, like weather monitoring, suffers from a lack of precision and still requires human labor for monitoring; its automation also remains limited. IoT allows operations to remove much of the human intervention in system function, farming analysis, and monitoring. Systems detect changes to crops, soil, environment, and more. They optimize standard processes through analysis of large, rich data collections. They also help prevent health hazards (e.g., E. coli contamination) and allow better control.
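The watering logic described above can be sketched in a few lines (zone names and the moisture threshold are illustrative assumptions, not from any farming system):

```python
# Hypothetical irrigation sketch: water only the zones whose soil
# moisture (as a fraction of saturation) is below a minimum level.
def irrigation_plan(moisture_by_zone, min_moisture=0.30):
    """Return the zones that need watering."""
    return [zone for zone, m in moisture_by_zone.items() if m < min_moisture]

readings = {"north": 0.42, "east": 0.28, "south": 0.31, "west": 0.12}
print(irrigation_plan(readings))  # only "east" and "west" need water
```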

Health Monitoring

Health-related issues can directly impact the quality of life of a person and development of a nation, and as the world population rises and healthcare costs continue to skyrocket, more individuals and organizations across the healthcare industry are looking for ways to reduce costs and improve patient care. The Internet of Things (IoT) has numerous applications in healthcare, from remote monitoring to smart sensors and medical device integration. It increases both the accuracy and size of medical data through diverse data collection from large sets of real-world cases. It has the potential to enhance existing technology, and the general practice of medicine by improving how physicians deliver care, as well as keep patients safe and healthy and reduce costs.

Real-time Health Monitoring

IoT health monitoring tracks a patient's condition with the assistance of sensors and the Internet. The health observation system can keep track of a patient's pulse rate, ECG, blood pressure, temperature, etc. If the system detects any abrupt change in the patient's heartbeat or temperature, it automatically alerts the user about the patient's status over IoT and additionally shows the patient's heartbeat and temperature live over the Internet. Ideally, doctors can quickly and reliably access the relevant patient information via IoT and take suitable action. This tremendously improves the quality of information and of patient care in the medical field. Using embedded wearable sensors, the system monitors the health parameters dynamically.
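A minimal sketch of such an alerting rule (the vital-sign limits below are illustrative placeholders, not clinical values):

```python
# Hypothetical vital-sign limits; real clinical thresholds depend on the patient.
LIMITS = {"pulse_bpm": (50, 120), "temp_c": (35.0, 38.0)}

def vitals_alerts(sample):
    """Return the names of vital signs in `sample` outside their limits."""
    out = []
    for name, (lo, hi) in LIMITS.items():
        if name in sample and not (lo <= sample[name] <= hi):
            out.append(name)
    return out

print(vitals_alerts({"pulse_bpm": 135, "temp_c": 36.8}))  # pulse is out of range
```

A real system would forward such alerts to medical staff and log the raw stream for later review, as the text describes.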

Remote Health Monitoring

The Internet of Things improves patient safety and provides better customer experiences through remote health condition monitoring. Healthcare providers can access real-time patient data frequently, giving them better visibility into patient health and allowing them to provide timely support and treatment. Physicians can serve more patients by allowing some individuals to return home to finish treatment while medical staff continues to monitor their condition remotely, thus reducing costs associated with lengthy hospital stays and improving patient satisfaction. Healthcare companies are expanding their in-home services by using remote condition monitoring to offer new independent living solutions designed for aging and disabled populations. IoT has been widely used to interconnect available medical resources and offer smart, reliable, and effective healthcare services to elderly people. Health monitoring for active and assisted living is one of the paradigms that can use the advantages of IoT to improve the lifestyle of the elderly. A typical IoT architecture customized for healthcare applications collects the data and relays it to the cloud, where it is processed and analyzed; feedback actions based on the analyzed data can then be sent back to the user.

Medical equipment

Mobile medical equipment in hospitals (e.g. a portable ultrasound machine) is used by different patients, and it can be difficult to locate where the equipment was last used. IoT helps track medical equipment. It can also provide predictive maintenance to ensure that the equipment works when needed.

Care

Perhaps the greatest improvement IoT brings to healthcare is in the actual practice of medicine because it empowers healthcare professionals to better use their training and knowledge to solve problems. They utilize far better data and equipment, which gives them a window into blind spots and supports more swift, precise actions. Their decision-making is no longer limited by the disconnects of current systems, and bad data. IoT also improves their professional development because they actually exercise their talent rather than spending too much time on administrative or manual tasks. Their organizational decisions also improve because technology provides a better vantage point.

Medical Information Distribution

One of the challenges of medical care is the distribution of accurate and current information to patients. Healthcare also struggles with adherence, given the complexity of following medical guidance. IoT devices not only improve facilities and professional practice, but also health in the daily lives of individuals.

Emergency Care

The advanced automation and analytics of IoT allow more powerful emergency support services, which typically suffer from limited resources and a disconnect with the base facility. IoT provides a way to analyze an emergency more completely from miles away, and it gives more providers access to the patient prior to their arrival. IoT gives providers critical information for delivering essential care on arrival and raises the level of care a patient receives from emergency professionals. This reduces the associated losses and improves emergency healthcare.

Research

Much of current medical research relies on resources lacking critical real-world information. It uses controlled environments, volunteers, and essentially leftovers for medical examination. IoT opens the door to a wealth of valuable information through real-time field data, analysis, and testing. IoT can deliver relevant data superior to standard analytics through integrated instruments capable of performing viable research. It also integrates into actual practice to provide more key information. This aids in healthcare by providing more reliable and practical data, and better leads; which yields better solutions and discovery of previously unknown issues. It also allows researchers to avoid risks by gathering data without manufactured scenarios and human testing.

Metering

Smart meter devices provide real-time data on customers' electricity, gas or water consumption back to the supplier. Consumers can receive more specific statements about their energy consumption. Detailed feedback on energy consumption with accurate real-time data from smart meters is available, and power outages can be reduced through more precise forecasting. Smart metering is part of the smart grid concept.
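Since smart meters typically report cumulative register values, per-interval consumption is obtained by differencing consecutive readings, as this small sketch shows (the readings in Wh are illustrative):

```python
# Hypothetical metering sketch: turn cumulative meter register values (Wh)
# into per-interval consumption figures for detailed customer statements.
def interval_consumption(register_readings):
    """Difference consecutive cumulative readings."""
    return [b - a for a, b in zip(register_readings, register_readings[1:])]

hourly = interval_consumption([100000, 100040, 100110, 100120])
print(hourly)  # 40, 70 and 10 Wh used in the three hours
```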

Service Buttons

Service buttons are smart solutions for logistics, production plants, workshops, construction sites or hospitals in the Internet of Things. Pickup, ordering and maintenance processes can be initiated by pressing the device's button. For example, spare parts can be ordered at the touch of a button. It avoids unnecessary routes and additional manual operations.

Easy integration based on the stand-alone format makes service buttons independent of the power supply and corporate networks. The device is powered by a standard battery and communicates via the mobile network (e.g. via NarrowBand IoT).

X-Sharing

The sharing economy is the most prominent term for collaborative consumption or the collaborative economy, covering services like car sharing, bike sharing and peer-to-peer property rental. With digital technology, including modern radio technologies, companies can easily track leased goods, control them remotely, and monitor the conditions and duration of usage. One major benefit for customers is the ability to pick up a shared good at one location and return it at another. The sharing company can always track and control its assets in real time.

In the case of car sharing, a wide range of information can be read out, distributed and used for business purposes:

  • Duration of usage
  • Fuel consumption
  • Car Status (e.g. diagnostic trouble codes)
  • Real time position

Shared bike solutions need low power consumption, good coverage and low latency. NarrowBand IoT solves the problems of high power consumption and short battery lifetime: NB-IoT chipsets and modules eliminate the need for chargers on the bicycles, reducing the overall cost of such a bicycle. The focus areas for bike sharing are:

  • Bicycle monitoring
  • Remote control (unlock/lock)
  • Position tracking

A variety of information can be accessed by offering sharing solutions, including usage rankings, cycling group statistics, general usage trends, usage trends by region or time, data connection issues, and location changes.
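Tracking whether a shared asset has left its service area is commonly done with a circular geofence around a center point; here is a sketch using the haversine great-circle distance (the coordinates and radius are illustrative values):

```python
import math

def inside_geofence(lat, lon, center_lat, center_lon, radius_m):
    """Check a position against a circular geofence using the
    haversine great-circle distance on a spherical Earth."""
    r_earth = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat), math.radians(center_lat)
    dphi = math.radians(center_lat - lat)
    dlmb = math.radians(center_lon - lon)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    dist = 2 * r_earth * math.asin(math.sqrt(a))
    return dist <= radius_m

# A bike about 1 km north of the service-area center, fence radius 5 km:
print(inside_geofence(52.529, 13.401, 52.520, 13.401, 5000))
```

A sharing backend would run this check on each position update and raise an out-of-area alert when it first returns False.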

Agriculture

Smart agriculture, also known as smart farming, is a concept for improving crop yields by integrating the newest technologies, such as sensor networks, into agriculture. These solutions should help transform and reorient agricultural processes to ensure efficiency gains, react to climate change and secure the food supply for future population growth. Rainfall predictions should also help farmers plan their crop calendars. With sensor networks it is possible to monitor the conditions of agricultural areas with regard to soil moisture, temperature and nutrient deficiency, which helps reduce bad harvests and costs and increase yields.

Wearables

Wearable technology is a hallmark of the Internet of Things and the most ubiquitous of its implementations to date. The efficiency of data processing achieved by various smart wristwear, smart clothes, and medical wearables is getting to the point where this consumer-oriented side of the IoT technology will bring exceptional value to our lives and become a new fashion along the way.

Some of the first functions that wearable devices are already delivering are related to identification and security. Maybe you don't consider the badge you wear at work a wearable device, but it does provide identification and security features useful within the work environment. Some advanced badges even include some biometric capabilities (such as fingerprint activation, so only the badge's owner can use it to open a locked door) to improve security. Badges can also include capabilities for location sensing, useful in emergencies to make sure everyone has successfully evacuated the building. A wearable bracelet provides a more reliable indication of location since it is less likely to be left in a jacket on the back of a chair.

Health- and fitness-oriented wearable devices that offer biometric measurements such as heart rate, perspiration levels, and even complex measurements like oxygen levels in the bloodstream are also becoming available. Technology advancements may even allow alcohol levels or other similar measurements to be made via a wearable device. The ability to sense, store, and track biometric measurements over time and then analyze the results, is just one interesting possibility. Tracking body temperature, for example, might provide an early indication of whether a cold or the flu is on the way.

Some additional capabilities of wearable devices are more mundane, but might also provide information that could be useful in adjusting environmental controls. Wearable devices could tell if you have your jacket on in the car or if it's just in the back seat (perhaps by placing a few stress measurement device threads within the fabric of the jacket). This could be helpful in keeping the car temperature at a comfortable level. If your wristband can measure perspiration levels that could also be used as a data point for adjusting both temperature and humidity.

The promise of the IoT is based on pervasive connectivity, and when associated with large collections of connected devices, significant benefits can accrue. Your wearable devices could interact with the devices of others in a crowd. Of course, privacy issues will continue to be a big concern for years to come. For example, would you want to know if someone sitting near you on the train had a high fever? Clearly you might want to know this, but the person with the fever might not want to broadcast it. If you both used the same healthcare provider, however, maybe that information would be shared, perhaps controlled via a smartphone filter.

Safety Solutions

Workplace Safety

Accidents in the workplace are a well-documented problem, with potentially devastating consequences for employees and employers alike. Despite clear safety regulations and procedures, risk management remains a huge challenge for employers in many industries. By using emerging technologies such as the Internet of Things (IoT) and pervasive cloud connectivity, organizations can now pull in work environment data, analyze it, and respond in ways that help keep workers safe and healthy.

IoT excels at keeping an eye on things, collecting data by means of connected sensors that help us understand our working environments. These sensors can monitor everything from factory equipment to the location and well-being of human beings. This makes them a great tool for monitoring potentially hazardous environments where it’s impossible to do so manually with any consistency.

When it comes to monitoring both internal and external factors, sensors make perfect sense. For example, you can collect IoT data from wearables (like helmets, jackets and watches, for instance) and combine that with environmental sensors to monitor both workers’ wellbeing and the state of their working environment. By tracking indicators of physical fitness like heartbeat and skin temperature, sensors can help watch out for employees who are starting to show strain or other signs of potential problems, and preventative action can be taken.

The use of an IoT solution also enables companies to monitor potential hazards within the working environment. For example, sensors can capture carbon monoxide levels, weather events, temperature and vibration, among many other factors.

The power of IoT in the workplace lies in its ability to:

  • Ensure long-term safety – By analyzing data over weeks and months, you can calculate long-term exposure to potentially hazardous conditions. With integration into HR and scheduling solutions, you could then trigger re-rostering processes to keep exposure levels below acceptable limits.
  • Improve compliance – With integration into business data regarding local, regional, or national worker safety regulations, you can monitor compliance and demonstrate your adherence to the rules as needed.
  • Predict issues and take proactive action – Using machine learning algorithms, you can analyze data across worksites to detect patterns that can predict potential issues before they impact workers.
  • Track workers with context awareness – With geolocation capabilities and schematics on particular worksite environments stored in business applications, you can track workers’ locations and alert them, for example, to not enter secured areas for which they may lack authorization.
  • Speed and improve rescue operations – In a disaster situation, you can collect critical data in real time, enabling rescue crews to understand the situation quickly and plan rescue operations that have a higher chance of success.
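The long-term exposure idea from the first bullet can be sketched as a simple accumulation against a limit (the weekly limit here is a hypothetical value, not a regulatory one):

```python
# Hypothetical exposure-tracking sketch: sum a worker's daily hours in a
# hazardous environment and compare against a weekly limit.
def exposure_exceeded(daily_hours, weekly_limit=20.0):
    """Return (over_limit, total_hours) for one worker's week."""
    total = sum(daily_hours)
    return total > weekly_limit, total

flag, total = exposure_exceeded([4.0, 5.5, 3.0, 6.0, 4.5])
print(flag, total)  # 23.0 hours exceeds the 20-hour limit
```

In the scenario described above, a True result would trigger the re-rostering process via the HR and scheduling integration.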

Safety at Home

Our homes are rapidly being filled with IoT devices that promise to increase convenience, improve energy savings and strengthen safety. Here are examples:

  • Home automation can provide assistance to residents with disabilities, potentially decreasing the level of support needed from caregivers and increasing independence.
  • Smart lightbulbs might provide peace of mind to a resident by illuminating the house or a room before they enter it.
  • Pet cams and feeders might provide needed support or comfort to a pet when their owners are away from home, or help reassure the owner of a pet’s health or safety.

Security Products

IoT home security systems promise to strengthen personal security at home by providing remote control and surveillance through Internet-connected devices. They can monitor the security status in real time, send alerts to the owner in case of any trespassing and optionally raise an alarm. Security cameras, video doorbells, and other security devices can be used to notify the homeowner when someone approaches or enters the house. These devices might also gather evidence to document violations of a protection order or other criminal behavior.

Personal security is also important when you are out and about. When we talk about personal security, classic stun guns and pepper sprays come to mind. But many helpful smart personal security devices can assist you on the spot as well, informing your loved ones and the authorities about your current status along with your GPS location. Some IoT devices can predefine the following alarm functionalities:

  • Geofence / wrong area
  • Serious impact / not moving alarm
  • Unusual movement / speed
  • External triggers
  • Dynamic roaming for teams

Pet Tracking

Pet tracking solutions have been designed to allow quick and reliable identification of a pet's location. These devices are often regarded as tags due to their small form factor, adjusted to fit even small pets like cats, and offer extended durability for harsh outdoor conditions. Pet trackers that rely on the Mobile IoT network provide high tracking accuracy on a pet's whereabouts over long distances and in remote locations. In this way, pet owners can track on demand, in near real time. Furthermore, an alert or notification can be automatically triggered based on a particular cat or dog's behavior, e.g. leaving a geo-fenced area such as the backyard. This allows one to quickly identify potential dangers. By using solutions for pet localization, owners can significantly increase the chances of finding a lost pet again.

Misc. Product Categories

Development Kits

IoT development kits are hardware platforms designed for the rapid prototyping of IoT applications. IoT developer kits combine microcontrollers and processors with wireless chips and other components in a pre-built, ready-to-program package that can be used to prototype IoT devices. APIs are usually provided for quick development of applications on top of the supplied platform.

Network Quality Monitoring

IoT network monitoring provides an opportunity for service providers to be more relevant and better meet the needs of connected consumers with a growing number of devices in their lives. Network monitoring allows service providers to:

  • Detect issues with a live view of the Mobile IoT Wide Area Network, or a peripheral Local Area Network (LAN)
  • Receive customized alerts for suspicious activity
  • Troubleshoot emerging issues
  • Improve network reliability and uptime
  • Reduce operating costs

Multipurpose Gateways

IoT gateways (also referred to as "advanced M2M devices" - as opposed to "basic M2M devices," such as sensors and actuators) are important components in the IoT ecosystem, acting as a bridge between endpoints (e.g. sensors, actuators etc.) and the cloud. This connection point exchanges data between the local area network (LAN) of devices performing sensing and actuation capabilities and the gateway, and the wide area network (WAN) between gateway and mobile operator network. The LAN is often referred to as the "capillary network" or "M2M area network." It therefore must support multiple communication technologies and protocols (e.g. 3GPP™, WiFi, LAN, CAN-Bus, Zigbee etc.).

Not only does the gateway transfer data between the sensors, actuators and the cloud, it is an aggregator for data, analyzing, merging and consolidating data to minimize the amount of messages and volume of data that needs to be forwarded to the cloud. This has positive impacts on latency, response time and network delivery costs.
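The aggregation role of a gateway can be sketched as batching raw readings into compact summaries before forwarding them to the cloud (the batch size and summary statistics are illustrative choices):

```python
# Hypothetical gateway sketch: consolidate raw sensor readings into
# min/mean/max summaries per batch, cutting the number of WAN messages
# by a factor of `batch`.
def aggregate(readings, batch=5):
    """Summarize each batch of readings into one compact record."""
    out = []
    for i in range(0, len(readings), batch):
        chunk = readings[i:i + batch]
        out.append({"min": min(chunk), "max": max(chunk),
                    "mean": sum(chunk) / len(chunk)})
    return out

# Ten raw temperature readings become two summary messages:
print(aggregate([20, 21, 19, 20, 20, 30, 31, 29, 30, 30]))
```

Forwarding two summaries instead of ten raw values is the latency and cost benefit the paragraph above refers to.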

It is also necessary to differentiate between routers and gateways because routers regulate the traffic between similar networks (e.g. fixed line). An IoT gateway can regulate the traffic of completely different network technologies.

Antennas

Multiband Chip Antenna Boosters

The new generation of multiband chip antennas called antenna boosters are frequency neutral. This means that the frequency range of the chip antenna is not predetermined by the component itself but can be completely optimized through electronic circuit design. Radiation optimization at multiple and simultaneous frequency bands is achieved through a simple circuit design that interconnects the antenna chip with the radio frequency (RF) transceiver: the matching network.


The antenna design flow using this new generation of components is much faster and easier than the traditional one. This is because the antenna part remains fixed (a standalone, miniature, SMD chip) while any optimization throughout the design process is done by adjusting the matching network. In effect, the design of the matching network can be fully automated by using the network synthesis tools available in most microwave circuit simulators used in the wireless industry. This streamlines the design process in a flexible way, making sure that updates to the design of the IoT device, including changes in the PCB size, embedding of covers, batteries, etc., are flexibly accommodated through a change of the matching circuit.

Thanks to this design flow, an antenna booster can be configured to operate at any band and in any device just by designing a suitable matching network. For example, in an IoT device requiring operation within the NB-IoT 900 MHz band (a monoband solution), such a matching network might consist of two passive components in a basic L-type topology (e.g. an LC network). A richer frequency response can be achieved simply by increasing the number of passive components in the matching network. In this manner, multiband cellular coverage (up to worldwide global coverage with some components) can be achieved with exactly the same antenna chip.
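As a sketch of how such an L-type network could be dimensioned, the following computes component values for matching two purely resistive impedances (the 20-ohm antenna resistance is a hypothetical example, not a datasheet value, and real designs must also account for reactive parts of the antenna impedance):

```python
import math

def l_match(f_hz, r_source=50.0, r_load=20.0):
    """Component values for a low-pass L-match between two resistances.
    Series inductor on the lower-resistance side, shunt capacitor across
    the higher. Assumes r_source > r_load and purely resistive loads."""
    q = math.sqrt(r_source / r_load - 1)  # loaded Q of the network
    xl = q * r_load                        # series inductive reactance (ohm)
    xc = r_source / q                      # shunt capacitive reactance (ohm)
    w = 2 * math.pi * f_hz
    return xl / w, 1 / (w * xc)            # (L in henry, C in farad)

L, C = l_match(900e6)
print(f"L = {L * 1e9:.2f} nH, C = {C * 1e12:.2f} pF")
```

In practice these starting values would be refined with the network synthesis tools mentioned above, using the measured antenna impedance on the actual PCB.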

Figure: Monoband Matching

Figure: Multiband Matching

As seen from the two examples above, the antenna booster remains the same component in both scenarios; the only part that changes is the matching network design in each case. This flexibility allows one to reuse the same antenna part across multiple IoT designs, enabling economies of scale while simplifying logistics in the supply chain. With the proper multiband matching network, this generation of antenna chips enables designing a single-SKU platform that features global, worldwide wireless coverage, which makes them ideal components for developing a full wireless IoT reference design with the smallest and slimmest form factors in the market.

Source: Ignion; to learn more, visit: ignion.io

Antennas

Antenna components generate and collect electromagnetic energy, acting as a transducer - a device that converts electrical energy from a circuit into electromagnetic (EM) energy. In the "Uplink" communication originated by the IoT device, the device transmitter develops, amplifies, modulates, and applies radio-frequency (RF) signals to the antenna. These RF currents flow through the antenna to produce electromagnetic waves which radiate into the atmosphere. Likewise, on the "Downlink" communication to the device, received electromagnetic waves "cut" through the antenna and induce alternating currents for use by the device's receiver. To have sufficient signal strength at the receiver, either the power which is transmitted must be very high, or the efficiency of the transmitting and receiving antennas must be high because of the propagation losses as the electromagnetic wave travels between the transmitter and receiver.

Electricity and electromagnetic waves are interrelated. Electromagnetic waves consist of an electric field and a magnetic field. Both exist in all electric circuits, as current-carrying conductors, such as a piece of wire, create a magnetic field around the conductor, and the potential difference between any two points in the circuit, the voltage, creates an electric field. Energy is contained in both the electric and magnetic fields, but in circuits this field energy is usually completely returned to the circuit upon the field's collapse. If the field does not return its full energy to the circuit, part of the wave is considered to have been released, or radiated, from the electrical circuit. Such radiated energy may create interference with other electrical circuits nearby. This radio frequency interference (RFI) can be an undesired radiation from radio transmitters. If it comes from another source, it is called electromagnetic interference (EMI), or noise. Antennas are optimized for operation in specific parts of the electromagnetic spectrum, so that they do not allow the electromagnetic wave energy to collapse back into the circuit.

The principle of antenna reciprocity implies that any antenna transfers energy from the transmitter into the atmosphere with the same efficiency with which it transfers energy from the atmosphere to its terminals. This occurs because an antenna's characteristics are identical whether it sends or receives electromagnetic energy. Because of reciprocity, one generally characterizes antennas from the perspective of their ability to transmit RF signals, considering that the same principles apply equally when the component is used instead to receive electromagnetic energy.

As antennas must produce or collect electromagnetic energy, they consist of conductors arranged in a configuration that increases their efficiency in doing so. This requires that the receiving antenna have the same polarization of the electric field as the transmitting antenna. Recall that this refers to the direction of the Electric field component of the electromagnetic field. The antenna's physical structure strongly influences this polarization. For example, a vertical antenna generates and receives vertically polarized waves; horizontally polarized signals could not be received through such a vertical antenna.

The electric field strength of an antenna represents its received signal strength. For a point source in free space it is inversely proportional to the distance from the transmitter, as derived from Maxwell's equations:

E = √(30 · Pt) / r

where E represents the strength of the electric field (in volts per meter), r is the distance from the point source (in meters), and Pt is the originally transmitted power in watts. The received field strength can be found by dividing the induced signal (in volts) at an antenna by its length (in meters).
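As a quick numeric sketch of this relationship, using the free-space point-source formula E = √(30 · Pt) / r consistent with the definitions above (the transmit power and distance chosen are illustrative):

```python
import math

def field_strength(pt_watts, r_m):
    """Free-space electric field strength E = sqrt(30 * Pt) / r, in V/m,
    for an ideal point source (no antenna gain, no obstructions)."""
    return math.sqrt(30.0 * pt_watts) / r_m

# A 10 W point source observed at 1 km:
print(f"{field_strength(10.0, 1000.0) * 1000:.2f} mV/m")
```

Doubling the distance halves the field strength, which is the inverse-distance behavior the text describes.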

The main properties of a good antenna are:

  • Good radiation efficiency
  • Good impedance match to the transmitter output (transmit antenna) or receiver input (receive antenna)
  • Good radiation pattern appropriate to the application

Passive parameters include:

  • Gain / Peak gain
  • Bandwidth
  • Polarization (circular, vertical, horizontal)
  • Return loss (S11)
  • Isolation (S21)
  • Total efficiency (ηtot)

When implementing an antenna in your IoT device, many factors must be considered to ensure proper performance: the use of an LC matching circuit for impedance matching, the implementation of tuning pads to maximize radiation efficiency, and careful consideration of the PCB size's impact on antenna efficiency.

Electromagnetic Spectrum

Electrically charged particles accelerating in a circuit between two points of different electric potential, or voltage, emit electromagnetic waves. These are synchronized oscillations of an electric and magnetic field, linear- (or plane-) polarized, such that the electric field vector (E) and magnetic field vector (B) are confined to two different planes, offset 90° to each other, along the direction of propagation (V). These electromagnetic waves travel at the speed of light in a vacuum, represented by the constant "c."

Figure: Electromagnetic Wave

Electromagnetic waves are typically described by the following physical properties:

  • Frequency f, the number of occurrences of a wave cycle per second, in hertz (Hz);
  • Wavelength λ, the distance (in meters) between consecutive corresponding points of the same phase of the wave;
  • Photon energy E, the energy carried by a single photon, in electronvolts (eV).

The relationship between these properties is given by the following equations:

c = f × λ        E = h × f = h × c / λ

where:

  • c = 299792458 m/s is the speed of light in a vacuum;
  • h = 6.62607015×10⁻³⁴ J·s = 4.135667696×10⁻¹⁵ eV·s is Planck's constant.
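These relationships (c = f·λ and E = h·f) can be checked with a few lines of code. The 2.4 GHz example frequency below is an assumption, chosen because it is typical for Bluetooth and ZigBee:

```python
C = 299_792_458            # speed of light in vacuum, m/s
H_EV = 4.135667696e-15     # Planck's constant, eV·s

def wavelength_m(freq_hz: float) -> float:
    """Wavelength in meters from c = f * lambda."""
    return C / freq_hz

def photon_energy_ev(freq_hz: float) -> float:
    """Photon energy in electronvolts from E = h * f."""
    return H_EV * freq_hz

f = 2.4e9  # example: 2.4 GHz ISM band (Bluetooth, ZigBee)
print(f"wavelength: {wavelength_m(f) * 100:.1f} cm")   # 12.5 cm
print(f"photon energy: {photon_energy_ev(f):.2e} eV")  # ~9.93e-06 eV
```

The tiny photon energy is why radio waves, unlike ultraviolet light, are non-ionizing.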

The inverse relationship between an electromagnetic wave's frequency and its wavelength means that as its frequency increases, the wavelength correspondingly decreases. If the wavelength is smaller than the dimensions of an obstacle in its path, the wave is reflected by that obstacle. This explains why electromagnetic radiation at higher frequencies is more susceptible to reflection. Short-range communication technologies such as WiFi 802.11ad, ZigBee, and Bluetooth, with wavelengths on the order of millimeters to centimeters, reflect off of common, everyday small-scale objects in the home, and do not travel as far as FM radio waves of around 3 meters, which simply pass through these obstacles.

Visible light is an electromagnetic wave at about 5×10¹⁴ Hz, while usable radio waves extend from about 1.5×10⁴ Hz up to 3×10¹¹ Hz. The human eye is only able to perceive visible light's very narrow range of electromagnetic frequencies, and is therefore blind to radio waves, infrared, ultraviolet, and microwaves.

The electromagnetic spectrum, as illustrated in the figure below, consists of the range of frequencies (or spectrum) of electromagnetic radiation, with the respective wavelengths and photon energies. Common communication media such as radio, 3GPP™ Wide Area Network, and Local Area Network technologies are located at different points in the electromagnetic spectrum, ranging from 3×10⁶ Hz (megahertz) for the former, up to 66×10⁹ Hz (gigahertz) for the latter.

Figure: Electromagnetic Spectrum

Impedance Matching

Impedance is the effective resistance of an electric circuit to alternating current. It determines the circuit's ability to transfer energy efficiently from the signal source into the transmission line, and from there into the load. An impedance mismatch can negatively impact circuit performance by creating signal reflections along the path between the source and the load. In the context of antennas (here, acting as the load), such reflections are minimized when the impedance along the transmission line is "matched," meaning the antenna's input impedance (ZL) matches the corresponding RF circuitry's output impedance (Zc), which is 50Ω in most cases. This yields a desired low Voltage Standing Wave Ratio (VSWR), which transfers the maximum amount of power from the RF circuit to the antenna. A perfect impedance match is obtained when ZL = Zc, which gives a reflection coefficient of zero (|S11| → −∞ dB).

The industry selected 50Ω as the standard transmission line impedance as a compromise between insertion loss and power handling in transmission lines. Insertion loss versus impedance reaches its minimum at about 77.5Ω, while power handling capacity versus impedance peaks at about 30Ω. For this reason, 50Ω was selected as the generic reference characteristic impedance.

Impedance matching poses a serious challenge in RF and microwave circuit designs because the margin of error decreases as frequency increases. This is due to the fact that the alternating current's wavelength decreases with increasing frequency, resulting in noticeable differences in voltage at different points along short transmission lines.

High-speed digital circuits require stable and controlled impedances to avoid increasing bit error rates, pulse distortion, reflection, and electromagnetic interference. Standing waves and reflected signals propagating back and forth within the transmission line interfere and mix with transmitted signals, causing data jitter and a reduction in the signal-to-noise ratio (SNR).

Figure: Impedance Matching

"Γ" represents the complex reflection coefficient for an impedance ZL attached to a transmission line with characteristic impedance Z0: Γ = (ZL − Z0) / (ZL + Z0). The impedance matching target is |S11| < −10 dB (VSWR < 2), or generally < −6 dB for IoT devices.
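These quantities can be computed directly from the standard definitions Γ = (ZL − Z0)/(ZL + Z0), |S11| in dB = 20·log₁₀|Γ|, and VSWR = (1 + |Γ|)/(1 − |Γ|); the 75 Ω load below is an arbitrary example value:

```python
import math

def reflection_coefficient(z_load: complex, z0: complex = 50) -> complex:
    """Complex reflection coefficient of a load on a line of impedance z0."""
    return (z_load - z0) / (z_load + z0)

def s11_db(gamma: complex) -> float:
    """Return loss expressed as |S11| in dB (negative for a partial match)."""
    return 20 * math.log10(abs(gamma))

def vswr(gamma: complex) -> float:
    """Voltage Standing Wave Ratio from the reflection magnitude."""
    m = abs(gamma)
    return (1 + m) / (1 - m)

gamma = reflection_coefficient(75)       # example: 75 ohm load on a 50 ohm line
print(f"|Gamma| = {abs(gamma):.2f}")     # 0.20
print(f"S11 = {s11_db(gamma):.1f} dB")   # -14.0 dB -> meets the < -10 dB target
print(f"VSWR = {vswr(gamma):.2f}")       # 1.50 -> meets the < 2 target
```

Complex (reactive) loads can be passed in directly, e.g. `reflection_coefficient(30 - 20j)`.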

Source: KYOCERA AVX

https://www.kyocera-avx.com/products/antennas/

Matching Circuits

Impedance matching for an antenna requires that the antenna present the same 50Ω impedance as the transmitter output. This is achieved by placing an LC circuit at the end of the transmission line, just before the antenna connector. LC circuits - also known as resonant or tuning circuits - are composed of inductors and capacitors, both of which are passive components that store and release energy:

  • Inductors, represented by the letter "L" for their property of inductance, measured in henries, store electrical energy in the form of magnetic energy. An inductor is essentially a conductor wound into a coil; current flowing through the coil generates a magnetic field around it.
  • Capacitors, represented by the letter "C" for their property of capacitance, measured in farads, store electrical energy in the form of an electrical charge producing a potential difference (static voltage) between its conductive plates, similar to a battery. A dielectric material forms an insulating layer between a capacitor's plates.
  • Small amounts of energy are always dissipated or lost in LC circuits, which is why these circuits often include a resistor, measured in ohms, represented by the letter "R" for its property of resistance.

LC circuits can oscillate with minimal damping when the resistance is kept as low as possible. Such circuits are ideal for tuning radio transmitters and receivers, optimizing their resonance for the particular carrier frequency to be used.

Smith Charts can be used to graphically represent the impedance of an antenna, showing the complex reflection coefficient, in polar form, as determined by the load impedance (ZL) and the impedance of the transmitter (Z0) - 50Ω. The impedance of an antenna can be either a single point on the chart for a specific frequency, or a range of points along a line, displaying the antenna impedance as a function of frequency. The center of the Smith Chart is the point where the reflection coefficient is zero, i.e., the only point where no power is reflected by the load impedance. The outer rim of the Smith Chart is where the reflection coefficient is 1, i.e., where all of the power is reflected by the load impedance. With a Smith Chart it is possible to:

  • Determine how decreasing series inductance or increasing shunt inductance moves a specific load impedance ZL toward a match,
  • Determine how increasing series capacitance or decreasing shunt capacitance cancels the reactance of the same load impedance.
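As a sketch of how such an LC (L-network) match is sized, the snippet below matches a purely resistive example load of 25 Ω to a 50 Ω line at an assumed frequency of 868 MHz, using the textbook L-network relations Q = √(Rhigh/Rlow − 1), Xseries = Q·Rlow, Xshunt = Rhigh/Q; a real antenna design would also have to cancel the load's reactance:

```python
import math

def l_match(r_low: float, r_high: float, freq_hz: float):
    """Size a lossless L-network matching two resistances.
    Returns (series_L_henries, shunt_C_farads): the inductor sits in
    series with the low-resistance side, the capacitor shunts the
    high-resistance side."""
    q = math.sqrt(r_high / r_low - 1)
    x_series = q * r_low        # required reactance of the series inductor
    x_shunt = r_high / q        # required reactance of the shunt capacitor
    w = 2 * math.pi * freq_hz
    return x_series / w, 1 / (w * x_shunt)

# Example (assumed values): match a 25 ohm antenna to a 50 ohm line at 868 MHz.
L, C = l_match(25, 50, 868e6)
print(f"L = {L * 1e9:.2f} nH, C = {C * 1e12:.2f} pF")  # ~4.58 nH, ~3.67 pF
```

The same arithmetic is what a Smith Chart performs graphically: the series element moves the point along a constant-resistance circle, the shunt element along a constant-conductance circle, until the chart's center (Γ = 0) is reached.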

Figure: Smith Chart

Figure: Increasing / decreasing capacitance and inductance

Figure: Impedance matching with LC circuits

Source: KYOCERA AVX

https://www.kyocera-avx.com/products/antennas/

Maximizing Radiation Efficiency

In addition to a matching network, the antenna layout may require the use of tuning pads to optimize the antenna length for different environments and maximize the performance for each application.

Figure: Antenna Fine Tuning

Such tuning pads shift the efficiency vs. frequency response of the antenna to center it on the specific transmission frequency. This helps to maximize the radiation efficiency of the IoT device.

Figure: Antenna Tuning Pads (example)


Source: KYOCERA AVX

https://www.kyocera-avx.com/products/antennas/

Total Radiated Power

Total Radiated Power (TRP) is a measure of how much power is radiated by an antenna when the antenna is connected to an actual radio (or transmitter), expressed in dBm. TRP is an active measurement, in that a powered transmitter is used to transmit through the antenna. The power radiated in all directions is measured and summed in order to calculate the TRP. The ratio of the TRP to the output of the transmitter should normally match (or be extremely close to) the antenna efficiency value (η). The TRP value is represented as follows:

Total Radiated Power TRP(dBm) = Conducted Power(dBm) + η(dB)

The TRP value differs from the Effective Isotropic Radiated Power (EIRP), as follows:

Effective Isotropic Radiated Power EIRP(dBm) = Conducted Power(dBm) + Peak Gain(dBi)
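Because all of these quantities are in logarithmic units, the arithmetic is simple addition. The 23 dBm conducted power, −3 dB total efficiency and 2 dBi peak gain below are illustrative assumptions, not figures for any particular device:

```python
def trp_dbm(conducted_dbm: float, efficiency_db: float) -> float:
    """Total Radiated Power: conducted power degraded by total antenna
    efficiency (efficiency_db is negative for a real antenna)."""
    return conducted_dbm + efficiency_db

def eirp_dbm(conducted_dbm: float, peak_gain_dbi: float) -> float:
    """Effective Isotropic Radiated Power: conducted power plus peak gain."""
    return conducted_dbm + peak_gain_dbi

# Example: a 23 dBm (200 mW) transmitter behind an antenna with
# 50% total efficiency (-3 dB) and 2 dBi peak gain.
print(trp_dbm(23, -3))   # 20 dBm (TRP)
print(eirp_dbm(23, 2))   # 25 dBm (EIRP)
```

TRP uses the averaged total efficiency, while EIRP uses the gain in the single strongest direction, which is why EIRP is always the larger of the two figures.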

Certified antenna laboratories and leading suppliers like AVX/Ethertronics offer TRP/TIS measurements in their design centers worldwide for 4G, NB-IoT and LTE Cat-M.

Many device manufacturers must guarantee a specific maximum TRP in order to ensure that their product complies with mobile network operator (MNO) quality requirements. Having an insufficient TRP may mean that the link budget requirements for terminal transmit power are not fulfilled. Such devices may be out of coverage in areas where other terminals are able to successfully communicate information with the mobile network. This parameter affects the "Uplink" performance of the 3GPP™ protocol.

Below you can find a list of reference mobile network operator TRP requirements (Deutsche Telekom) for 2G, 3G, 4G, 5G-NR. Please consult your service provider for any local requirements that they may have.

Beside Head, in Left/Right Hand:

  • GSM850 >= 20 dBm
  • GSM900 >= 20 dBm
  • GSM1800 >= 21 dBm
  • GSM1900 >= 21 dBm
  • UMTS Band I >= 15 dBm
  • UMTS Band II >= 16.5 dBm
  • UMTS Band V >= 11 dBm
  • UMTS Band VIII >= 11 dBm
  • LTE Band 1 >= 13.5 dBm
  • LTE Band 2 and Band 25 >= 13.5 dBm
  • LTE Band 3 >= 13.5 dBm
  • LTE Band 4 and Band 66 >= 13.5 dBm
  • LTE Band 5 and Band 26 >= 9.8 dBm
  • LTE Band 7 >= 13.5 dBm
  • LTE Band 8 >= 9.8 dBm
  • LTE Band 12 and Band 17 >= 9.8 dBm
  • LTE Band 20 >= 9.8 dBm
  • LTE Band 28 >= 9.8 dBm
  • LTE Band 38 >= 13.5 dBm
  • LTE Band 39 >= 13.5 dBm
  • LTE Band 41 >= 13.5 dBm
  • LTE Band 71 >= 9.8 dBm

In Left/Right Hand:

  • GSM850 >= 24 dBm
  • GSM900 >= 25 dBm
  • GSM1800 >= 23 dBm
  • GSM1900 >= 23 dBm
  • UMTS Band I >= 17 dBm
  • UMTS Band II >= 17 dBm
  • UMTS Band V >= 14.7 dBm
  • UMTS Band VIII >= 15 dBm
  • LTE Band 1 >= 15.5 dBm
  • LTE Band 2 and Band 25 >= 15.5 dBm
  • LTE Band 3 >= 15.5 dBm
  • LTE Band 4 and Band 66 >= 15.5 dBm
  • LTE Band 5 and Band 26 >= 14.3 dBm
  • LTE Band 7 >= 15.5 dBm
  • LTE Band 8 >= 14.3 dBm
  • LTE Band 12 and Band 17 >= 14.3 dBm
  • LTE Band 20 >= 14.3 dBm
  • LTE Band 28 >= 14.3 dBm
  • LTE Band 38 >= 15.5 dBm
  • LTE Band 39 >= 15.5 dBm
  • LTE Band 41 >= 15.5 dBm
  • LTE Band 71 >= 14.3 dBm
  • NR Band n1 >= 15.5 dBm
  • NR Band n3 >= 15.5 dBm
  • NR Band n7 >= 15.5 dBm
  • NR Band n28 >= 14.3 dBm
  • NR Band n38 >= 15.5 dBm
  • NR Band n78 >= 15.5 dBm

Wrist-bound Wearables:

  • GSM900 >= 18 dBm
  • GSM1800 >= 19 dBm
  • UMTS Band I >= 13 dBm
  • UMTS Band VIII >= 9 dBm
  • LTE Band 1 >= 11.5 dBm
  • LTE Band 3 >= 11.5 dBm
  • LTE Band 7 >= 11.5 dBm
  • LTE Band 20 >= 7.8 dBm

Free Space:

  • GSM850 >= 27 dBm
  • GSM900 >= 28 dBm
  • GSM1800 >= 26 dBm
  • GSM1900 >= 26 dBm
  • UMTS Band I >= 20 dBm
  • UMTS Band II >= 20 dBm
  • UMTS Band V >= 17.7 dBm
  • UMTS Band VIII >= 18 dBm
  • LTE Band 1 >= 18.5 dBm
  • LTE Band 2 and Band 25 >= 18.5 dBm
  • LTE Band 3 >= 18.5 dBm
  • LTE Band 4 and Band 66 >= 18.5 dBm
  • LTE Band 5 and Band 26 >= 18.0 dBm
  • LTE Band 7 >= 18.5 dBm
  • LTE Band 8 >= 18 dBm
  • LTE Band 12 and Band 17 >= 18.0 dBm
  • LTE Band 20 >= 18 dBm
  • LTE Band 28 >= 18 dBm
  • LTE Band 38 >= 18.5 dBm
  • LTE Band 39 >= 18.5 dBm
  • LTE Band 41 >= 18.5 dBm
  • LTE Band 71 >= 18.0 dBm
  • NR Band n1 >= 18.5 dBm
  • NR Band n1 >= 18.5 dBm for NSA with LTE anchor on low bands (B8, B20, B28)
  • NR Band n1 >= 18.5 dBm for NSA with LTE anchor on mid and high band (B3, B7)
  • NR Band n3 >= 18.5 dBm
  • NR Band n3 >= 18.5 dBm for NSA with LTE anchor on low band (B8, B20, B28)
  • NR Band n3 >= 18.5 dBm for NSA with LTE anchor on mid and high band (B1, B7)
  • NR Band n7 >= 18.5 dBm
  • NR Band n7 >= 18.5 dBm for NSA with LTE anchor on low band (B8, B20, B28)
  • NR Band n7 >= 18.5 dBm for NSA with LTE anchor on mid band (B1, B3)
  • NR Band n28 >= 18 dBm
  • NR Band n28 >= 18 dBm for NSA with LTE anchor on low band (B8, B20)
  • NR Band n28 >= 18 dBm for NSA with LTE anchor on mid and high band (B1, B3, B7)
  • NR Band n38 >= 18.5 dBm
  • NR Band n38 >= 18.5 dBm for NSA with LTE anchor on low band (B8, B20)
  • NR Band n38 >= 18.5 dBm for NSA with LTE anchor on mid band (B1, B3)
  • NR Band n78 >= 18.5 dBm
  • NR Band n78 >= 18.5 dBm for NSA with LTE anchor on any band (B1, B3, B7, B8, B20, B28, B38)

Note: Other TRP requirements may apply for NarrowBand IoT (NB-IoT) and LTE-M (eMTC).

Total Isotropic Sensitivity

Total Isotropic Sensitivity (TIS) is a commonly quoted specification in the industry. TIS is an active measurement, and the antenna must be connected to the receiver. The minimum power that allows the device to work (at a given error rate) defines the minimum sensitivity. The minimum sensitivity in all directions is then measured and summed in order to compute the TIS. The ratio of the TIS to the chipset's sensitivity should normally match (or be extremely close to) the antenna efficiency value. The TIS value is represented as follows:

Total Isotropic Sensitivity TIS(dBm) = Conducted Sensitivity(dBm) − η(dB) (ideally)

Certified antenna laboratories and leading suppliers like AVX/Ethertronics offer TRP/TIS measurements in their design centers worldwide for 4G, NB-IoT and LTE Cat-M.

Many device manufacturers must guarantee a specific minimum TIS in order to ensure that their product complies with mobile network operator (MNO) quality requirements. Having an insufficient TIS may mean that the link budget requirements for terminal receiving sensitivity are not fulfilled. Such devices may be out of coverage in areas where other terminals are able to successfully receive information from the mobile network. This parameter affects the "Downlink" performance of the 3GPP™ protocol.
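A compliance check against such MNO limits is again simple dB arithmetic: the TIS is the conducted sensitivity degraded by the antenna efficiency (subtracting a negative η(dB) raises, i.e. worsens, the figure). The −108 dBm conducted sensitivity and −3 dB efficiency below are assumed example values; −94 dBm is the LTE Band 1 free-space limit listed below:

```python
def tis_dbm(conducted_sensitivity_dbm: float, efficiency_db: float) -> float:
    """Total Isotropic Sensitivity: conducted sensitivity degraded by
    antenna efficiency (a -3 dB efficiency makes TIS 3 dB worse)."""
    return conducted_sensitivity_dbm - efficiency_db

# Example (assumed values): chipset conducted sensitivity -108 dBm,
# total antenna efficiency -3 dB.
tis = tis_dbm(-108, -3)
print(tis)         # -105 dBm
print(tis <= -94)  # True: meets the LTE Band 1 free-space limit
```

Since TIS limits are "less than or equal to" a (negative) threshold, a more negative TIS means more margin.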

Below you can find a list of reference mobile network operator TIS requirements (Deutsche Telekom) for 2G, 3G, 4G, 5G-NR. Please consult your service provider for any local requirements that they may have.

Beside Head, in Left/Right Hand:

  • GSM850 <= -97 dBm
  • GSM900 <= -95 dBm
  • GSM1800 <= -99 dBm
  • GSM1900 <= -99.5 dBm
  • UMTS Band I <= -101 dBm
  • UMTS Band II <= -98.5 dBm
  • UMTS Band V <= -94.5 dBm
  • UMTS Band VIII <= -96 dBm
  • LTE Band 1 <= -89 dBm
  • LTE Band 2 and band 25 <= -89 dBm
  • LTE Band 3 <= -89 dBm
  • LTE Band 4 and band 66 <= -89 dBm
  • LTE Band 5 and band 26 <= -85 dBm
  • LTE Band 7 <= -89 dBm
  • LTE Band 8 <= -85 dBm
  • LTE Band 12 and band 17 <= -85 dBm
  • LTE Band 20 <= -85 dBm
  • LTE Band 28 <= -85 dBm
  • LTE Band 38 <= -89 dBm
  • LTE Band 39 <= -89 dBm
  • LTE Band 41 <= -89 dBm
  • LTE Band 71 <= -85 dBm

In Left/Right Hand:

  • GSM850 <= -99 dBm
  • GSM900 <= -99 dBm
  • GSM1800 <= -100 dBm
  • GSM1900 <= -100 dBm
  • UMTS Band I <= -103 dBm
  • UMTS Band II <= -101 dBm
  • UMTS Band V <= -100 dBm
  • UMTS Band VIII <= -101 dBm
  • LTE Band 1 <= -91 dBm
  • LTE Band 2 and Band 25 <= -91 dBm
  • LTE Band 3 <= -91 dBm
  • LTE Band 4 and Band 66 <= -91 dBm
  • LTE Band 5 and Band 26 <= -89.5 dBm
  • LTE Band 7 <= -91 dBm
  • LTE Band 8 <= -89.5 dBm
  • LTE Band 12 and Band 17 <= -89.5 dBm
  • LTE Band 20 <= -89.5 dBm
  • LTE Band 28 <= -89.5 dBm
  • LTE Band 32 <= -91 dBm
  • LTE Band 38 <= -91 dBm
  • LTE Band 39 <= -91 dBm
  • LTE Band 41 <= -91 dBm
  • LTE Band 71 <= -89.5 dBm
  • NR Band n1 <= -91 dBm (10 MHz BW, SCS 15 kHz)
  • NR Band n3 <= -91 dBm (10 MHz BW, SCS 15 kHz)
  • NR Band n7 <= -91 dBm (10 MHz BW, SCS 15 kHz)
  • NR Band n28 <= -89.5 dBm (10 MHz BW, SCS 15 kHz)
  • NR Band n78 <= -91 dBm (10 MHz BW, SCS 30 kHz)
  • NR Band n78 <= -88 dBm (20 MHz BW, SCS 30 kHz)

Wrist-bound Wearables:

  • GSM900 <= -92 dBm
  • GSM1800 <= -96 dBm
  • UMTS Band I <= -98 dBm
  • UMTS Band VIII <= -93 dBm
  • LTE Band 1 <= -86 dBm
  • LTE Band 3 <= -86 dBm
  • LTE Band 7 <= -86 dBm
  • LTE Band 20 <= -82 dBm

Free Space:

  • GSM850 <= -103 dBm
  • GSM900 <= -103 dBm
  • GSM1800 <= -104 dBm
  • GSM1900 <= -103 dBm
  • UMTS Band I <= -106 dBm
  • UMTS Band II <= -104 dBm
  • UMTS Band V <= -103 dBm
  • UMTS Band VIII <= -104 dBm
  • LTE Band 1 <= -94 dBm
  • LTE Band 2 and Band 25 <= -94 dBm
  • LTE Band 3 <= -94 dBm
  • LTE Band 4 and Band 66 <= -94 dBm
  • LTE Band 5 and Band 26 <= -93.5 dBm
  • LTE Band 7 <= -94 dBm
  • LTE Band 8 <= -93.5 dBm
  • LTE Band 12 and Band 17 <= -93.5 dBm
  • LTE Band 20 <= -93.5 dBm
  • LTE Band 28 <= -93.5 dBm
  • LTE Band 32 <= -94 dBm
  • LTE Band 38 <= -94 dBm
  • LTE Band 39 <= -94 dBm
  • LTE Band 41 <= -94 dBm
  • LTE Band 71 <= -93.5 dBm
  • NR Band n1 <= -94 dBm (10 MHz BW, SCS 15 kHz)
  • NR Band n3 <= -94 dBm (10 MHz BW, SCS 15 kHz)
  • NR Band n7 <= -94 dBm (10 MHz BW, SCS 15 kHz)
  • NR Band n28 <= -93.5 dBm (10 MHz BW, SCS 15 kHz)
  • NR Band n38 <= -94 dBm (10 MHz BW, SCS 15 kHz)
  • NR Band n78 <= -94 dBm (10 MHz BW, SCS 30 kHz)
  • NR Band n78 <= -91 dBm (20 MHz BW, SCS 30 kHz)

The LTE/NR antenna correlation (ECC), when measured under free-field conditions, must be equal to or better than 0.5 (i.e., ≤ 0.5). For devices with more than 2 antennas, this applies to all antenna pairs actively used for one frequency.

Note: Other TIS requirements may apply for NarrowBand IoT (NB-IoT) and LTE-M (eMTC).

PCB Size Impact on Antenna Performance

The PCB size and antenna placement are the most important factors for embedded antenna performance. In general, the smaller the board, the lower the performance in the lower frequency bands. This phenomenon usually affects constrained devices operating in LTE Band 20 (800 MHz) and Band 8 (900 MHz), e.g. smaller form factor NB-IoT and LTE-M devices.
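A rough intuition for this effect: at low bands, the quarter-wavelength that the antenna and its ground plane would ideally span is far larger than a small IoT PCB. The sketch below compares λ/4 for LTE Band 20 (~800 MHz) and Band 3 (~1800 MHz); the exact band center frequencies are simplified assumptions:

```python
C = 299_792_458  # speed of light in vacuum, m/s

def quarter_wavelength_cm(freq_hz: float) -> float:
    """Quarter of the free-space wavelength, in centimeters."""
    return C / freq_hz / 4 * 100

print(f"Band 20 (800 MHz): {quarter_wavelength_cm(800e6):.1f} cm")   # ~9.4 cm
print(f"Band 3 (1800 MHz): {quarter_wavelength_cm(1800e6):.1f} cm")  # ~4.2 cm
```

A PCB only a few centimeters long is a small fraction of the ~9 cm needed at 800 MHz, which is why efficiency drops fastest in the low bands as the board shrinks.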

Figure: Antenna Performance Affected by PCB Length, L

With the IoT Solution Optimizer, customers can model the impact on antenna efficiency of placing specific antenna components at different locations on PCBs of varying sizes. The antenna efficiency loss is modeled for the LTE frequency of the selected mobile network operator.

Source: KYOCERA AVX

https://www.kyocera-avx.com/products/antennas/

Passive Antennas

Most antennas are considered to be passive because they only contain electrically passive components, such as metal rods, capacitors, and inductors. Passive components do not add any power to an electrical signal in the link budget; instead, they simply redirect all received energy in a specific direction. The properties of passive antennas are bidirectional (identical for transmitted and received signals).

Significant RF design experience is required to properly design a passive antenna that fulfills a project's technical requirements. It is strongly recommended that customers work closely with their antenna suppliers for detailed integration guidance.

Active Antennas

Active antennas are solutions that always combine a passive antenna element, active components such as RF switches, diodes or transistors, and a driver or software to control the circuitry. There are different types of active antennas, depending on which parameter is actively changed:

  • Frequency (band switching/aperture tuning)
  • Bandwidth (band switching/aperture tuning or impedance matching)
  • Antenna impedance (impedance matching)
  • Radiation pattern (Active Steering™, a technology patented by Ethertronics).

With an active antenna it is possible to implement applications needing band switching / aperture tuning. IoT devices tend to be small, and the performance (bandwidth/efficiency) of their embedded antennas can therefore be strongly degraded. By using an active antenna solution such as band switching/aperture tuning, it is possible to cover a wider frequency range by actively switching bands. For the same number of frequency bands to be covered, an active antenna can be smaller than a passive antenna; at equal size, the active antenna will cover more frequency bands.

Figure: Active Antenna

This technique can be implemented for example using the Kyocera-AVX EC646 RF switch with Ether Switch & Tune™ technology, together with the standard antenna 1004795 or a custom design. The chip provides broader global band coverage with a single antenna element. It achieves this by using parasitic loading and active tuning techniques, especially to meet the stringent low band antenna efficiency requirements. Combining extensive antenna systems expertise and proprietary algorithms, the RF band switching seamlessly adjusts the characteristics of a wireless antenna to:

  • Cover NB-IoT and/or LTE-M bands
  • Retune the antenna for frequency shifts
  • Reduce the antenna's physical volume by up to 50%, without performance trade-offs
  • Avoid high losses in the switch, thanks to a low on-resistance (Ron)

Figure: EC646 High Performance SP4T Antenna

Source: KYOCERA AVX

https://www.kyocera-avx.com/products/antennas/

Laser Direct Structuring

Laser Direct Structuring (LDS) technology involves the creation of a custom antenna that exhibits a specific form factor while still meeting the required performance criteria. The manufacturing process of LDS antennas consists of the following steps:

  • Step 1: An LDS-suitable resin is loaded with additives and molded to the desired surface shape with traditional molding tools.
  • Step 2: The laser processing machine is preloaded with a 3D pattern of the needed shape.
  • Step 3: The laser is turned on and follows the needed shape's pattern, activating the additives in the molded part and leaving a visible "burnt" trace.
  • Step 4: The plating process deposits any combination of Copper (Cu), Nickel (Ni), Silver (Ag), and/or Gold (Au) over the lasered area to create an RF trace.

Figure: Pictures of a Production using LDS

Key benefits of LDS technology include:

  • Smaller and thinner devices
  • Conformal designs that follow curves where little 3D volume is available
  • Design freedom for industrial designs
  • Prototypes identical to mass production parts in shape and conductivity, made on the same machines
  • Design flexibility
  • Different plating finishes
  • Added-value painting process for cosmetics
  • Support for SMT, LEDs, and matching components (cable soldering)
  • Full 3D technology

Source: KYOCERA AVX

https://www.kyocera-avx.com/products/antennas/

IoT Device Components (Hardware & Software)

Wireless Communication Modules

3GPP™ wireless communication modules are small electronic embedded systems packaging the radio chipset together with the radio front-end and peripherals. Integrators widely use radio modules to avoid the difficulty of a direct integration of the radio chipset. The sensitivity of radio circuits, as well as the accuracy of layouts and components required to achieve operation on a specific frequency, mean that only large enterprises with significant investment opt to integrate chipsets directly. Furthermore, due to the legal regulations imposed on radiated emissions, radio circuits are usually subject to conformance testing and certification by standardization bodies such as ETSI or the Federal Communications Commission (FCC). Radio modules are controlled via an AT command interface.
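A minimal sketch of this AT command interface is shown below. It frames the standard signal-quality query +CSQ and parses its "+CSQ: <rssi>,<ber>" response using the mapping from 3GPP TS 27.007 (RSSI in dBm = −113 + 2·<rssi>); the serial transport to an actual module is omitted, and the response string is a made-up example:

```python
def build_at_command(cmd: str) -> bytes:
    """AT commands are terminated with CR before being written
    to the module's UART."""
    return f"AT{cmd}\r".encode("ascii")

def parse_csq(response: str) -> int:
    """Parse a '+CSQ: <rssi>,<ber>' response into an RSSI value in dBm
    (mapping defined in 3GPP TS 27.007)."""
    values = response.split(":")[1].strip()
    rssi_index = int(values.split(",")[0])
    if rssi_index == 99:
        raise ValueError("signal strength not known or not detectable")
    return -113 + 2 * rssi_index

print(build_at_command("+CSQ"))   # b'AT+CSQ\r'
print(parse_csq("+CSQ: 18,99"))   # -77 (dBm)
```

In a real device, the bytes would be written to the module's UART (e.g. via a serial library) and the response read back before parsing.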

Figure: Radio Chipset and Module Functionalities

Examples of 3GPP™ wireless communication module components:

  • Switch Mode Power Supply: The switch mode power supply converts a wide range of input voltages, as typically provided by a battery with its constantly changing state of charge, to one or more precisely regulated output voltages as required by the wireless communication chipset
  • Radio Frequency (RF) Transceiver: Transmitter and receiver that processes received signals from the antenna; it also contains power amplification and filtering stages for transmitting
  • GNSS Receiver: Some modules integrate a receiver for Global Navigation Satellite Systems like GPS, Galileo, GLONASS, and BeiDou; often all of them are supported
  • MCU: Some modules integrate a Micro Controller Unit, allowing applications to run on the module. This saves space and integration effort for the device manufacturer when compared to a separate MCU chip
  • Level of integration: In general, different wireless communication chipsets provide different levels of integration. A component, e.g. flash memory, could be an external component on a module with a chipset from vendor A, while it is an integral part of chipsets made by vendor B.

As shown in the figure below, modules support all 3GPP™ and Transport Layer protocols required to communicate with the Mobile Network Operator's infrastructure. That said, support for specific Messaging/Management IoT protocols is usually not available natively in the communication module. In such cases, the application developer needs to implement their own stack to communicate with the IoT service platforms.

Figure: Protocol Support in Different Solution Layers

Wireless Communication Chipsets

3GPP™ wireless communication chipsets are the key element in IoT devices that allow for communication with the MNO's radio access network (eNodeB). The protocol stack and baseband elements of a radio transmitter and receiver (transceiver) are implemented within this component. It usually includes processors for the different supported WAN and LAN radio protocol interfaces, internal memory, GPIOs, resource and power management, connectivity interfaces (e.g. I2C, SPI, UART), receive Analog-to-Digital Converter (ADC), and transmitter Digital-to-Analog Converter (DAC).

Figure: 3GPP™ Wireless Communication Chipset

Examples of 3GPP™ wireless communication chipset components:

  • WAN and/or LAN communication: Cellular, wireless, or wired for LAN and WAN
  • Central Processing Unit (CPU): Executes the program code for the NB-IoT protocol stack
  • Digital Signal Processor (DSP): Processing unit optimized for calculations associated with modulating/demodulating and encoding/decoding digital signals
  • Flash Memory: Non-volatile memory for storing settings and data that are needed to survive a reboot
  • Digital Frontend: Performs analog-to-digital conversion of received signals and digital-to-analog conversion for signals to be transmitted
  • Real-time Clock (RTC), Power Management Unit (PMU), Universal Asynchronous Receiver Transmitter (UART) interface, Serial Peripheral Interface (SPI), General Purpose I/O (GPIO): Peripheral components providing time and date, power management communication with external components, etc.
  • (Optional) Global Navigation Satellite System (GNSS): Provides for positioning via GPS, GLONASS, BeiDou, and Galileo satellite systems.

Microcontroller Units (MCU)

Microcontroller Units (MCU) are highly-integrated, small-form-factor computers, usually implemented on a single Metal-Oxide-Semiconductor (MOS) integrated circuit (IC). Although similar to "Systems on a Chip" (SoC), SoCs usually include additional components alongside an MCU. One or more CPUs (Central Processing Unit cores), memory, and programmable input/output peripherals are packaged into an MCU. Program memory comes in the form of ferroelectric RAM, NOR flash, or OTP ROM, and a small quantity of RAM is also included.

Microcontrollers are designed for use in embedded devices and automatically-controlled products, such as constrained IoT devices, in contrast to the more powerful microprocessors used in PCs.

With their reduced size and cost, sufficient memory, and input/output ports, MCUs are economical platforms to run IoT device applications managing assets on small IoT devices. While the term "IoT device application" may be used to refer to the entire IoT service, including the IoT device and server along with the connectivity over a WAN network such as 3GPP™, the application running on the IoT device itself is usually located in the MCU. Some advanced wireless communication modules include an entire SoC, meaning that the MCU is integrated into the module's package. MCUs can also commonly handle analog or mixed-signal inputs/outputs, allowing for the management of analog systems on the device. In order to conserve power and run at single-digit milliwatts or microwatts, microcontrollers may use 4-bit words and operate at frequencies as low as 4 kHz. While sleeping, the MCU turns its CPU clock and most peripherals off, resulting in power consumption in the nanowatt range. While waiting for specific triggered events from a sensor or actuator, they are able to retain specific functionalities. In other cases, the MCU may need to play a performance-critical role, acting like a Digital Signal Processor (DSP), with higher clock speed and power consumption.

Sensors

Sensors (also known as "detectors") are the subsystem within the IoT device whose purpose is to detect events or changes in the asset being monitored and send the information to the application running on the microcontroller (MCU). In a broader context, as the IoT device provides "sensory" functionality, the name of this critical subsystem component is often used to generically describe the far more complex IoT device that hosts it. Traditional fields where sensors are deployed include the monitoring of temperature, pressure, or flow. For industrial and scientific purposes, there are also sensors that can measure chemical and physical properties of materials: optical sensors measure refractive index, vibrational sensors are used to determine fluid viscosity, and electro-chemical sensors can monitor fluid pH. Alongside the ever-growing spectrum of digital sensors, analog sensors are still in widespread use, common examples being potentiometers and force-sensing resistors.

A sensor's sensitivity indicates how much the sensor's output changes when the measured input quantity changes. Sensors are generally designed to have negligible effects on the asset being measured. Parallel to the advances in microelectronics, an increasing number of sensors can be manufactured on a microscopic scale. Known as "microsensors," they have the ability to detect minute changes in the asset's state and alert the IoT application.

An extensive, though incomplete, list of sensors used in IoT devices is provided below:

  • 3D sensor (e.g. laser, radar)
  • Accelerometer sensor
  • Acoustic sensor
  • Air Particle sensor
  • Breathalyzer
  • Carbon Dioxide sensor
  • Carbon Monoxide detector
  • Catalytic Bead sensor
  • Chemical sensor
  • Chlorine Residual sensor
  • Corrosion sensor
  • Electrochemical Gas sensor
  • Flooding sensor
  • Flow sensor
  • Fluorescent Chloride sensor
  • Gyroscope sensor
  • Humidity sensor
  • Hydrogen sensor
  • Hydrogen Sulfide sensor
  • Hygrometer
  • Image sensors (e.g. CCD, CMOS)
  • Infrared sensor
  • Laser sensor
  • Level sensor
  • Magnetic sensor
  • Metal detector
  • Motion detector
  • Nitrogen Oxide sensor
  • Oxygen sensor
  • Oxygen-Reduction sensor
  • Ozone monitor
  • pH sensor
  • Photodetector
  • Potential sensor
  • Presence and Proximity sensor
  • Pressure sensor
  • Pyrometer
  • Seismic sensor
  • Smoke sensor
  • Soil Humidity sensor
  • Thermal sensor
  • Total Organic Carbon sensor
  • Vibration sensor
  • Water Quality sensor
  • Water Consumption sensor
Actuators

Actuators (also referred to as "movers") are components within an IoT device that can move, manipulate, or activate an asset. The asset in question may be another component, a coupled mechanism, or a system. In order to perform its tasks, an actuator requires a source of energy (typically the device's battery or a power supply) and a low-energy control signal. In the context of IoT devices, this energy source and control signal are an electric voltage or current. The actuator converts the energy source into mechanical motion to act upon its environment. The control system is usually the application running on the IoT device's microcontroller (MCU).

There are numerous types of actuators, but most can be classified as hydraulic, pneumatic, electric, thermal, magnetic, twisted and coiled polymer (TCP), supercoiled polymer (SCP), or mechanical actuators. Some common examples of actuators used in IoT devices include:

  • Electric motors
  • Electric valves
  • Electroactive polymers
  • Hydraulic cylinders
  • Servomotors
  • Solenoids
  • Spring valves
  • Stepper motors
GNSS Systems Global Positioning System (GPS)

Global Positioning System (GPS) is one of several Global Navigation Satellite Systems (GNSS) in existence today. It was launched in the late 1970s and uses a constellation of 27 satellites to provide global coverage. GPS satellites orbit at a height of 20,200 km above the earth.

GPS signals are transmitted on the L1, L2 and L5 frequencies and modulated using CDMA-based techniques. This allows the UE receiver to accurately identify the time-shift in the repeated code sequence (and hence the distance), as well as recover the highly attenuated satellite signal from the noise floor. The UE receiver calculates the range to each satellite and from it computes its own coordinates on earth. It is also programmed to advance or delay its clock until the pseudo-ranges of all four satellites converge at a single point, for high accuracy. Note that GPS UE receivers will only consider the earliest-arriving signals, ignoring multipath signals caused by reflections off nearby surfaces. The effects of temperature, pressure, and relative humidity in the troposphere may also delay the propagation of the signals to sub-light-speed velocities. Finally, the ionosphere higher above typically interferes with GPS signals in a frequency-dependent manner. By calculating the range to each satellite using both L1 and L2, the UE receiver can virtually eliminate the effect of the ionosphere. Note that other satellite systems use different modulation schemes; although BeiDou and GPS use CDMA schemes, GLONASS uses FDMA-based modulation.
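The dual-frequency ionosphere correction mentioned above relies on the fact that ionospheric delay scales with the inverse square of the carrier frequency. A minimal sketch of the standard first-order ionosphere-free combination follows; the range and delay values are toy numbers.

```python
# Ionosphere-free combination of dual-frequency pseudoranges.
# Because ionospheric delay scales with 1/f^2, combining L1 and L2
# measurements cancels the first-order ionospheric error.

F_L1 = 1575.42e6  # Hz
F_L2 = 1227.60e6  # Hz

def iono_free(p1, p2, f1=F_L1, f2=F_L2):
    """First-order ionosphere-free pseudorange (meters)."""
    g = (f1 / f2) ** 2
    return (g * p1 - p2) / (g - 1.0)

# Toy numbers: true range 20,000 km; an ionospheric delay of 5 m on L1
# scales by (f1/f2)^2 on L2.
true_range = 20_000_000.0
i1 = 5.0
i2 = i1 * (F_L1 / F_L2) ** 2
p = iono_free(true_range + i1, true_range + i2)
print(p)  # recovers approximately the true range
```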

GPS satellites’ L1 signal includes a navigation message, the publicly available Coarse Acquisition (C/A) code, and an encrypted precision P(Y) code (not available to the public). The low bit-rate navigation message includes the following:

  • GPS date and time
  • Satellite status and health
  • Satellite ephemeris data, which allows the receiver to calculate the satellite’s position
  • Almanac, optional data which contains information and status for all GPS satellites, so receivers know which satellites are available for tracking. During its booting sequence, the UE receiver recovers the almanac, which consists of coarse orbit and status information for each satellite in the constellation.

The ephemerides and almanac information are also available online. IoT devices supporting Assisted-GPS (A-GPS) use data connectivity over the 3GPP™ network to download the latest files, thereby saving much time and avoiding a download from the satellite.

The P(Y) code provides for better interference rejection than the C/A code. It is therefore reserved for military use, making military GPS more robust than civilian GPS.

The L2 signal transmits the P(Y) code as well as the L2C C/A code – a second, publicly available code for civilian use.

Many of the commercially available GNSS solutions have a multi-constellation receiver. This means they can access signals from several systems, for instance GPS, GLONASS, BeiDou, and Galileo. By complementing GPS with additional constellations, a larger number of satellites are in the field of view, resulting in many benefits:

  • Reduced signal acquisition time
  • Improved position and time accuracy
  • Reduced effects caused by obstructions in the line of sight
  • Improved spatial distribution of visible satellites

GNSS UE which utilize signals from a variety of constellations have built-in redundancy. If signals from one constellation cannot be demodulated, the receiver can switch to an alternate constellation, ensuring continuity of positioning. While in multi-constellation mode, the receiver tracks at least five satellites, one of which must be from the second constellation. The receiver is therefore able to determine the time-offset between the constellations.

Global Navigation Satellite System (GNSS)

Global Navigation Satellite Systems (GNSS) are satellite-based positioning systems that allow assets to be located with high precision. There are several GNSS technologies available today:

  • GPS (United States): GPS was the first GNSS system, launched in the late 1970s. It uses a constellation of 27 satellites and provides for global coverage, orbiting at a height of 20,200 km.
  • BeiDou (China): BeiDou is the Chinese navigation satellite system, consisting of 35 satellites operational since December 2012, and orbiting at different heights (35,787 km and 21,528 km) above the earth. BeiDou is slated to provide global coverage by the end of 2020.
  • GLONASS (Russia): GLONASS is operated by the Russian government. Its constellation consists of 24 satellites for global coverage, located at 19,140 km altitude.
  • Galileo (European Union): Galileo is a civil GNSS system operated by the European Global Navigation Satellite Systems Agency (GSA). Galileo uses 27 satellites with the first Full Operational Capability (FOC) satellites launched in 2014. The full constellation should be fully deployed by 2020, and cruise overhead at a Medium Earth Orbit (MEO) of 23,222 km.
  • IRNSS (India): The Indian Regional Navigation Satellite System (IRNSS) provides service to India and the surrounding area, using a full constellation of seven satellites.
  • QZSS (Japan): QZSS is a regional navigation satellite system which provides service to Japan and the Asia-Oceania region, in service since 2018.

Three “segments” are defined for each GNSS system:

  • User Segment: The equipment processing received signals from GNSS satellites and using them to derive and apply location and time information. GNSS positioning is based on a technique called trilateration, accurate to within a few meters. GNSS user equipment (UE) receives the signals from multiple GNSS satellites and then recovers the information that each satellite transmitted, to determine the time of signal propagation and the range (direct path from the satellite to the user equipment). This recovered information is used to compute the UE’s time and position. As signals travel at the speed of light, high accuracy of time alignment is required in the system. GPS satellites use rubidium clocks with an accuracy of ±5 parts in 10¹¹. More accurate ground-based cesium clocks synchronize each satellite’s clock. GNSS receivers use this information to synchronize their local clocks with the satellites. Due to their relatively inaccurate quartz crystal clocks, receivers need at least four satellites to determine a fix (position) and time. This implies that a line of sight is necessary between the receiver’s antenna and all four satellites. Specific techniques and equipment are expected to improve the accuracy and availability of GNSS position and time information even further. The receiver’s computational power may restrict its ability to make use of additional satellites; generally, manufacturers may use additional satellites as part of their proprietary algorithms.
  • Space Segment: This consists of the GNSS satellites, orbiting tens of thousands of kilometers above the ground and moving at several kilometers per second. Each GNSS system has its own “constellation” of satellites, arranged in predefined orbits that provide the desired coverage footprint. Each satellite within a GNSS constellation broadcasts an identification signal at around 1.5 GHz which includes its ephemerides (the parameters that define its orbit) and time, as well as its status. Satellite ephemeris information is accurate to many decimal places; UE receivers can use it to determine exactly where the satellite was when the information was transmitted.
  • Control Segment: The ground-based network of master control stations, data uploading stations, and monitoring stations. In order to guarantee high availability and resilience, there are usually primary and back-up control stations, as well as a wide geographical distribution of monitoring stations throughout the world. Master control stations adjust each satellite’s ephemerides and onboard high-precision clocks whenever necessary to maintain accuracy. Monitoring stations, in turn, monitor the constellation’s signals and status and relay this information to the master control stations. The latter analyze the signals and transmit orbit and time corrections to the satellites through the data uploading stations. Satellites are periodically taken out of service to adjust their orbits; these minor drifts are caused by the solar wind and gravitational effects of the earth.
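The trilateration performed in the User Segment can be sketched as a small Gauss-Newton solver for the receiver position and clock bias; the satellite coordinates, receiver position, and 1 ms clock error below are made-up test data, not real orbital values.

```python
import math

# Toy GNSS fix: solve receiver position (x, y, z) and clock bias b
# (expressed in meters) from four pseudoranges rho_i = |sat_i - p| + b.

def gauss_solve(a, y):
    """Solve the n x n linear system a @ x = y by Gaussian elimination."""
    n = len(a)
    m = [row[:] + [y[i]] for i, row in enumerate(a)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(m[r][c]))
        m[c], m[piv] = m[piv], m[c]
        for r in range(c + 1, n):
            f = m[r][c] / m[c][c]
            for k in range(c, n + 1):
                m[r][k] -= f * m[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(m[r][k] * x[k] for k in range(r + 1, n))
        x[r] = (m[r][n] - s) / m[r][r]
    return x

def fix(sats, rhos, iters=12):
    """Gauss-Newton iteration for position and clock bias."""
    px, py, pz, b = 0.0, 0.0, 0.0, 0.0  # initial guess: earth's center
    for _ in range(iters):
        jac, res = [], []
        for (sx, sy, sz), rho in zip(sats, rhos):
            d = math.dist((sx, sy, sz), (px, py, pz))
            jac.append([(px - sx) / d, (py - sy) / d, (pz - sz) / d, 1.0])
            res.append(rho - (d + b))
        dx, dy, dz, db = gauss_solve(jac, res)
        px, py, pz, b = px + dx, py + dy, pz + dz, b + db
    return (px, py, pz), b

sats = [(20.2e6, 0, 0), (0, 20.2e6, 0), (0, 0, 20.2e6), (12e6, 12e6, 12e6)]
truth, bias = (6.371e6, 0.0, 0.0), 299_792.458  # 1 ms clock error, in meters
rhos = [math.dist(s, truth) + bias for s in sats]
pos, b = fix(sats, rhos)
print(pos, b)  # converges to the true position and bias
```

Note how four satellites are the minimum: three unknowns for position plus one for the receiver clock error, exactly as described above.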

GNSS receivers usually offer multiple modes of operation:

  • Default (Manufacturer settings)
  • Continuous tracking
  • Automatic tracking
  • Host CPU tracking

The IoT Solution Optimizer models the power consumption of processes required to execute these modes of operation.

GNSS Ephemeris Data

Global Navigation Satellite Systems transmit navigation message data from each satellite. Each navigation message consists of 5 subframes, which form one page. The first 3 subframes of the page contain the ephemerides. Approximately 30 seconds (6 seconds/subframe) are required to download these 5 subframes. Ephemeris data is used for real time satellite coordinate computation which is required in position computation.

The ephemeris data is unique to each satellite and contains information on week number, satellite accuracy and health, age of data, satellite clock correction coefficients, and orbital parameters. It is valid for two hours before and two hours after the “time of ephemeris.”

Unlike the almanac, which is optional and assists with fixing the satellites for the first time, the ephemeris data must be kept up to date for the accurate computation of position.
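The two-hour validity window can be expressed as a one-line check; the times below are arbitrary example values in seconds.

```python
# Ephemeris validity check: the data is usable roughly two hours either
# side of its "time of ephemeris" (toe). All times are in seconds.

TWO_HOURS = 2 * 3600

def ephemeris_valid(now, toe, window=TWO_HOURS):
    """True if `now` falls within the toe +/- window validity interval."""
    return abs(now - toe) <= window

toe = 100_000
print(ephemeris_valid(toe + 5_400, toe))   # True  (1.5 h past toe)
print(ephemeris_valid(toe + 10_800, toe))  # False (3 h past toe)
```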

Ephemeris information is also available online. IoT devices supporting A-GPS use data connectivity over the 3GPP™ network to download the latest satellite ephemerides, thereby saving much time and avoiding a download from the satellite.

GNSS Almanac Data

Global Navigation Satellite Systems transmit navigation message data from each satellite. Each navigation message consists of 5 subframes, which form one page. The last 2 subframes of the page contain almanac data. Approximately 30 seconds (6 seconds/subframe) are required to download the 5 subframes.

The almanac is optional and assists with fixing the satellites for the first time. A full almanac download consists of 25 pages, which requires 12.5 minutes to fully download. The almanac data is the same for all satellites and contains less accurate orbital information than ephemerides. It is valid for a period of up to 90 days and can be used to shorten the time-to-first-fix (TTFF) by up to 15 seconds versus not having the almanac stored.
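The timing figures above follow directly from the navigation message structure:

```python
# Timing arithmetic from the navigation message structure:
# 5 subframes per page, 6 s per subframe, 25 pages per full almanac.

SEC_PER_SUBFRAME = 6
SUBFRAMES_PER_PAGE = 5
PAGES_PER_ALMANAC = 25

page_s = SEC_PER_SUBFRAME * SUBFRAMES_PER_PAGE  # seconds per page
almanac_s = page_s * PAGES_PER_ALMANAC          # seconds for full almanac
print(page_s, almanac_s / 60)  # 30 s per page, 12.5 minutes total
```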

Almanac information is also available online. IoT devices supporting A-GPS use data connectivity over the 3GPP™ network to download the latest almanac file, thereby saving much time and avoiding a download from the satellite.

Cold Start vs. Hot Start

Time-to-first-fix (TTFF) refers to the time needed by the GNSS user equipment (UE) receiver to perform the first position fix, starting from the moment it is booted ON. There are three different TTFF scenarios, depending on the receiver’s status when the acquisition cycle is started, as well as the availability and validity of satellite almanac and ephemeris data required for computing position:

  • Cold Start: In this scenario, no data is stored in the receiver. The position solution may nevertheless be calculated by performing a satellite search without the use of almanac data. To fix the first position, the ephemeris data and clock corrections (CED), together with the GNSS time reference (GST), are downloaded.
  • Warm Start: This is the case whenever the ephemeris data and clock corrections stored in the GNSS chips’ RAM are still valid, but the position and clock error are not known. In this scenario, the receiver only needs to retrieve the latest GST information from the navigation message.
  • Hot Start: If the ephemeris data and clock corrections stored in the GNSS chips’ RAM are still valid and accurate position and clock error are also known, a precise position can be computed without needing to demodulate the navigation message.

Figure: Acquisition with cold start


Figure: Acquisition with hot start

Continuous Tracking

As the name implies, GNSS solutions operating in “Continuous Tracking” mode track satellites for prolonged periods of time. After a brief boot and acquisition cycle, the GNSS chip enters its tracking sequence, which typically consumes anywhere between 10 and 100 mWh of energy.

Figure: GNSS “Continuous Tracking” procedure

In the figure below, the x-axis (time) has been compressed to show the current consumption during cold start acquisition, as well as during the GNSS’ continuous tracking over many seconds. The GNSS chip may be deactivated for a certain period and rebooted in order to obtain positioning information. In such an event, the new boot triggers an acquisition cycle. The IoT Solution Optimizer allows one to model duty cycle periods of continuous operation by specifying the duration of each tracking window (time powered ON) and the average period between each booting event (time powered ON + time powered OFF).

Note: The IoT Solution Optimizer considers an acquisition with cold start if the duration between boots exceeds the validity period of the current satellite Ephemeris data. Otherwise, a hot start is modeled.
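The modeling rule in the note above can be sketched as a simple decision function; the four-hour validity figure is an assumption derived from the ±2 h ephemeris window described earlier.

```python
# Sketch of the cold/hot start rule: a reboot triggers a cold start only
# if the stored ephemeris has expired in the meantime. The 4-hour
# validity figure is an assumption (time of ephemeris +/- 2 h).

EPHEMERIS_VALIDITY_S = 4 * 3600

def start_type(seconds_since_last_boot):
    """Return which acquisition type a reboot would trigger."""
    if seconds_since_last_boot > EPHEMERIS_VALIDITY_S:
        return "cold"
    return "hot"

print(start_type(1_800))   # hot  (30 min between boots)
print(start_type(21_600))  # cold (6 h between boots)
```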

Figure: GNSS component in “Continuous Tracking” mode

Automatic Tracking

If placed into Automatic Tracking mode, the GNSS chip tracks satellites during regularly scheduled time windows, separated by periods of stand-by.

Figure: GNSS "Automatic Tracking" procedure

The operation begins with a brief boot and acquisition cycle. The IoT Solution Optimizer models the acquisition as a cold start if the duration since the last GNSS boot exceeds the validity period of the satellite ephemeris data; otherwise, a hot start acquisition cycle is used. Whereas hot starts require between 1 and 2 mWh, a cold start acquisition cycle may consume up to 35 mWh of energy. Actual measurements from the selected component are used to precisely model the energy consumed by each cycle.

Post-acquisition, the GNSS may enter a tracking sequence consisting of stand-by periods and satellite-scanning windows. The energy consumed during tracking may vary significantly; usually, it is below 100 mWh. The stand-by windows, in turn, typically consume anywhere between 2 and 10 mWh of energy.

To configure Automatic Tracking in the IoT Solution Optimizer, please specify the average duration of the GNSS tracking events (time scanning for satellites) and their period (time scanning for satellites + in stand-by).

To conserve energy during IoT device “dormancy states”, the GNSS chip may be shut down and booted only when positioning information is needed within specific time windows. At each boot event, a new acquisition cycle with cold start is required. The IoT Solution Optimizer allows one to model such a behavior by specifying the duration of each tracking window (time powered ON, i.e. scanning for satellites + in stand-by) and the average period between each booting event (time powered ON + time powered OFF). The duty cycle is the percentage of the time powered ON over the total time between boots.
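A rough per-boot energy budget can be assembled from the figures quoted in this section; the specific acquisition, tracking, and stand-by values below are illustrative assumptions within the stated ranges.

```python
# Rough per-cycle energy budget for a duty-cycled GNSS chip, using
# illustrative figures within the ranges quoted above (assumptions).

def cycle_energy_mwh(acq_mwh, track_mwh, standby_mwh):
    """Energy of one boot cycle: acquisition + tracking + stand-by."""
    return acq_mwh + track_mwh + standby_mwh

hot = cycle_energy_mwh(acq_mwh=1.5, track_mwh=20.0, standby_mwh=5.0)
cold = cycle_energy_mwh(acq_mwh=35.0, track_mwh=20.0, standby_mwh=5.0)
duty_cycle = 600 / 3600  # powered ON 10 minutes out of every hour
print(hot, cold, duty_cycle)
```

The gap between the hot and cold figures shows why keeping boot intervals inside the ephemeris validity window pays off directly in battery life.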

Host CPU Tracking

GNSS solutions placed into Host CPU Tracking mode operate with windows of alternating satellite-tracking and stand-by (like Automatic Tracking mode), separated by longer periods of hibernation, a GNSS power-saving mode where the RF section of the GNSS chip is shut down and the signal processing thread is suspended. Typical energy consumption in hibernation may range from 0.1 to 1 mWh. During hibernation, the receiver stores the current ephemeris data. Hibernation usually lasts less than 2 hours, thereby ensuring that the stored satellite orbit information remains valid for updating satellite positions whenever a hot start is triggered. Alternatively, suppliers may implement a procedure whereby the GNSS periodically turns ON its RF section and takes a “snapshot” of the satellite constellation, shutting down its RF section thereafter and processing the signal to obtain each visible satellite’s range.

Figure: GNSS "Host CPU Tracking" procedure

Host CPU Tracking begins with a brief boot and acquisition cycle. The IoT Solution Optimizer models an acquisition with cold start if the duration since the last GNSS boot exceeds the validity period of the current satellite ephemeris data; otherwise, a hot start acquisition cycle is used. Whereas hot starts require between 1 and 2 mWh, a cold start acquisition cycle may consume up to 35 mWh of energy.

Post-acquisition, the GNSS may enter a tracking sequence consisting of stand-by periods and satellite-scanning windows. The energy consumed during tracking may vary significantly; usually, it is below 100 mWh. The stand-by windows, in turn, typically consume anywhere between 2 and 10 mWh of energy.

To configure Host CPU Tracking in the IoT Solution Optimizer, please specify the average duration of the GNSS tracking events (time scanning for satellites) and their period (time scanning for satellites + in stand-by). It is also necessary to specify the duration of time that the device is in hibernation. An example of Host CPU Tracking is shown in the picture below. The GNSS component leaves hibernation by booting, followed by an acquisition and a prolonged period in stand-by.

The GNSS chip may also be deactivated for certain periods and rebooted in order to obtain positioning information during a specific time window. At each boot event, a new acquisition cycle with cold start is required. The IoT Solution Optimizer allows one to model such a behavior by specifying the duration of each tracking window (time powered ON: scanning for satellites + in stand-by) and the average period between each booting event (time powered ON + time powered OFF). The duty cycle is the percentage of the time powered ON over the total time between boots.

Figure: GNSS component exiting hibernation

Please note that Host CPU Tracking is steered by the IoT device application. Developers must configure the exact algorithm used to trigger scanning windows and hibernation periods. As the figure below illustrates, these can be non-periodic in nature and highly optimized to specific use cases.

Figure: GNSS component in “Host CPU Tracking” mode

Manufacturer Default

Most manufacturers have a highly optimized “Default” mode that can be activated. In this state, the GNSS operates according to an algorithm creating windows of alternating satellite-tracking and stand-by (like Automatic Tracking mode), separated by longer periods of hibernation (similar to Host CPU Tracking mode). Unlike Host CPU Tracking, however, the IoT device application does not steer the algorithm. This reduces implementation complexity for suppliers, which can rely on a generic, best-practice algorithm.

The GNSS chip may also be deactivated for certain periods and rebooted in order to obtain positioning information during a specific time window. At each boot event, a new acquisition cycle with cold start is required. The IoT Solution Optimizer allows one to model such a behavior by specifying the duration of each tracking window (time powered ON: scanning for satellites + in stand-by) and the average period between each booting event (time powered ON + time powered OFF). The duty cycle is the percentage of the time powered ON over the total time between boots.

Assisted GPS

Standard GPS receivers (also referred to as "standalone GPS") need to download the ephemerides and the almanac information from satellites in order to calculate their own position. Retrieving this data can take approximately 30–40 seconds, as the satellites' signals are transmitted at 50 bps. Furthermore, if satellite signals are lost during the acquisition of this information, the data is discarded, and the standalone system must reinitialize the download.
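The 30-40 second figure follows from the navigation message rate: a GPS subframe is 300 bits, so at 50 bps each subframe takes 6 seconds.

```python
# Why a standalone fix is slow: the navigation message trickles in at
# 50 bps, and a GPS subframe is 300 bits (6 seconds each).

BPS = 50
BITS_PER_SUBFRAME = 300

def download_time_s(subframes):
    """Seconds needed to receive a number of navigation subframes."""
    return subframes * BITS_PER_SUBFRAME / BPS

print(download_time_s(3))  # the three ephemeris subframes
print(download_time_s(5))  # a full page of the navigation message
```

By contrast, the same data fetched from an A-GPS server over an LTE-M or NB-IoT bearer arrives in a fraction of a second.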

In Assisted GPS (A-GPS), also referred to as "Augmented GPS," the mobile network operator deploys a cache server for GPS data called the A-GPS server. These servers independently download the orbital information from the satellite constellation and store it in their database. A-GPS-capable IoT devices can then connect to these servers over the mobile network to download the stored content. As the data rate of these radio bearers is high, the download of orbital information completes much faster. Cell tower (eNodeB) data is furthermore used to enhance quality and precision when satellite signal conditions are inadequate. Using this system can however incur additional data charges for the IoT device's service, depending on the tariff.

There are two types of Assisted GPS available:

  • Mobile Station Based (MSB): In this system, the A-GPS server supplies ephemerides and almanac information to the requesting IoT devices, enabling their GPS receivers to track satellites and calculate their positions much faster. The mobile operator network provides the precise time.
  • Mobile Station Assisted (MSA): Alternatively, the A-GPS server may calculate the position of the IoT device. A snapshot of the GPS signal with the approximate time is sent to the server, which then processes this information to determine the device's position. The GPS receiver then receives its coordinates from the A-GPS server directly.
Cloud Location over Cellular Introducing Cloud Location over Cellular

Polte's patented Cloud Location over Cellular (C-LoC) technology has proven to be among the most accurate cellular positioning methods available, offering a 10-20x performance improvement over traditional location technologies for 4G and 5G devices globally.

In the past, connected devices have required an amalgamation of technologies to provide highly accurate, universal positioning for a wide range of use cases indoors and outdoors. This drives up cost, drains batteries, compromises security, and calls for complex deployments, making it more difficult for consumers and businesses around the world to benefit from valuable location insights.​

Via the Polte Location API, Polte provides seamless indoor and outdoor location continuity simply leveraging cloud computing and existing cellular infrastructure. This approach enables IoT solution developers, systems integrators, and solution providers to eliminate the need for incorporating additional infrastructure, hardware, or radios, reducing the cost and power drain of asset trackers and other cellular connected devices. Polte uses a unique, secure "edge-to-cloud" architecture that offloads calculations to the cloud rather than leaving location information vulnerable in the device itself.

 By unlocking more accessible, ubiquitous visibility for assets across verticals from transportation and logistics to Industry 4.0, Polte can both displace the need for GPS for wide area tracking and create a new set of use cases in tracking greater quantities of smaller, lesser value assets.

For those use cases where Polte alone is not an ideal fit, due to the environment or location accuracy requirements, Polte can boost a solution by hybridizing with another location technology. Polte augments these hybrid solutions by offering better coverage, lower cost, longer battery life, and higher security than if using other location technologies like GPS, Wi-Fi, or BLE alone.

The Polte Location API currently supports the delivery of location for 4G and 5G devices via a standard, secure REST-based interface. There are two options of service available depending on selected device capabilities: Polte CoreRes (CR) and Polte SuperRes (SR).

For more information, please visit: https://www.polte.com/, or contact Polte with your questions here.

Polte SuperRes (SR)

Polte SuperRes (SR) delivers an exceptionally high level of cellular location accuracy and is achieved when Polte firmware is embedded directly into an IoT device’s cellular chipset. To receive this highest accuracy, customers may use the "Powered by Polte" SR chipsets, modules, devices, and end-to-end solutions provided by our ecosystem partners. SR currently supports LTE-M devices, but will evolve to support Cat-1+ devices in 2021-2022.

For more information, please visit: https://www.polte.com/, or contact Polte with your questions here.

Polte CoreRes (CR)

If you need to locate in areas that have not yet been optimized, have existing devices that do not yet have Polte firmware enabled in the chipset, or are operating on LTE Cat.1+ networks, Polte CoreRes (CR) is ready for you now. CR supports LTE-M, NB-IoT and LTE Cat.1+ devices. With Polte CR, Polte can be leveraged by any Mobile IoT module simply through the use of standard AT commands and API integration for location accuracy greater than that of Cell ID or Enhanced Cell ID. You may try CR (Beta) here.

Though Polte SR delivers an exceptionally high level of cellular location accuracy, both Polte CR and Polte SR offer better accuracy than Cell ID and Enhanced Cell ID. Polte CR offers significantly better accuracy than Cell ID (often 2-3x better), and both are available as global solutions depending on the networks available.

For more information, please visit: https://www.polte.com/, or contact Polte with your questions here.

Hybrid Positioning Technology

Location with Wi-Fi, Cell, and GNSS

Device makers, chipset and module manufacturers, application and platform developers, and network operators looking to provide accurate location have traditionally been burdened by battery drain, technical integration challenges, and sub-par accuracy results.

With Skyhook’s Hybrid Positioning Technology, IoT platforms, marketplaces, and connected devices of any shape and size can benefit from a solution that uses Wi-Fi, Cell Signals, and GNSS to provide location in any environment around the world. Hybrid Positioning Technology uses signals from any of these methods alone, or intelligently combines them with each other to provide the highest degree of accuracy, with minimal battery usage.

Skyhook is the leading independent provider of location technology. With over 18 years of experience, Skyhook has leveraged a global network of over 5 billion Wi-Fi access points and over 200 million cell towers to provide clients across all IoT industries with a solution to reliably track their most valuable assets, optimize management of fleets, resolve emergency location, develop applications with incredible user experiences, and so much more.

Today, Skyhook partners with the world’s leading chipset manufacturers, device makers, application and operating system developers, and network operators to enable location-ready devices and solutions using one of several flexible integration options:

  • SDK – The complete suite of location technology available in libraries for nearly every operating system. Includes features like offline positioning, location smoothing, incredibly fast time-to-fix, scan collapsing and caching for power optimization and more.
  • Lite Client – All of the core functionality found in the SDKs, in a more compact open-source library. This enables device prototyping and a seamless integration with custom and proprietary operating systems.
  • Embedded Client – Great for asset tags, wearable devices, and many other IoT applications, this extremely lightweight solution uses a binary protocol for lightweight data transmission, without compromising accuracy.
  • Cloud APIs – RESTful JSON and XML interfaces for platform and other server-to-server integrations.
  • SIM Positioning Applet – Network operators and IoT solution providers that include SIM connectivity solutions can position devices even when roaming out of the core network coverage area.

Skyhook understands that each use case is unique. That’s why our team takes the time to understand individual needs and partners with customers to find the best integration option, build out custom features, and provide always-on support throughout the entire relationship. To learn more about Skyhook technology, please visit https://www.skyhook.com/.

Voltage Regulation Low-dropout Regulators

Low-dropout Regulators (LDO) are linear voltage regulators that support a small (= low) voltage drop between the input voltage (Vi) and output voltage (Vo), generally below 500 mV. Unlike standard linear voltage regulators, which typically require a 2-Volt differential between input and output to work properly, LDOs work well even when the difference between the two is very small, being able to regulate down to less than 100 mV. That said, the ability to reject noise and ripple from the input supply drops significantly below ca. 500 mV, which may become an issue with many LDOs. A key benefit of using an LDO is that it protects more expensive, connected loads from voltage transients, current surges, reverse voltage, power supply noise, etc.

For most applications, using an LDO regulator makes sense if the input voltage is no more than a few volts above the output voltage; otherwise, the regulator will consume too much power, and a more efficient switching regulator (DC/DC Converter, for instance) could become a better implementation option. In the figure below, dropout voltage (Vdrop) is shown as the voltage differential between the input voltage (Vi) and output voltage (Vo). When no load is applied and the circuit is resting, a minute quiescent current (Iq) is drawn.
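The efficiency argument above can be made concrete with the usual LDO power budget; the 3.6 V cell, 3.3 V rail, load current, and quiescent current below are example values, not data for any specific part.

```python
# LDO power budget: the regulator dissipates (Vi - Vo) across its pass
# element at the load current, plus its own quiescent draw (Vi * Iq).
# All component values are illustrative assumptions.

def ldo_dissipation_w(vi, vo, i_load, i_q):
    """Power burned in the LDO itself, in watts."""
    return (vi - vo) * i_load + vi * i_q

def ldo_efficiency(vi, vo, i_load, i_q):
    """Fraction of input power delivered to the load."""
    return (vo * i_load) / (vi * (i_load + i_q))

# 3.6 V cell regulated down to 3.3 V at 50 mA, with 20 uA quiescent current:
p = ldo_dissipation_w(3.6, 3.3, 0.050, 20e-6)
eta = ldo_efficiency(3.6, 3.3, 0.050, 20e-6)
print(f"{p * 1000:.1f} mW dissipated, {eta:.1%} efficient")
```

With a small input-output differential the efficiency stays above 90%; repeat the calculation with Vi = 5 V and it falls sharply, which is the point where a switching regulator becomes attractive.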

Figure: Low-dropout Regulator Circuit

Voltage Regulators

Electronic circuits can maintain a steady voltage by employing voltage regulators in their circuitry. Two forms of voltage regulation are available: linear regulators and switching regulators.

A linear regulator maintains its constant output voltage in consideration of both the input voltage and its load by acting like a variable resistor, varying its internal resistance and continually dissipating the difference between the input and regulated voltages as heat. Single-chip regulator Integrated Circuits (ICs) are quite common and can be used in IoT devices to manage the voltage requirements of the circuitry.

Linear regulators typically have a maximum rated output current. This is generally limited by either its ability to dissipate power, or by the output transistor’s capacity to carry current. Furthermore, linear regulators can be implemented in two configurations:

  • Shunt regulators place the regulating device in parallel with the load
  • Series regulators place the regulating device between the source and the regulated load

Low-dropout regulators (LDO) are commonly found linear regulators implemented in IoT devices. Advantages of an LDO include low noise, a simple circuit configuration, and few external parts. Disadvantages are its relatively poor efficiency, potential heat generation, and ability to only step-down (buck) voltage.

Switching regulators work on a different principle: they switch an active device ON and OFF to maintain an average output value over time. By contrast, because the input voltage to a linear regulator is higher than the regulated voltage, there is a drop-out voltage (Vdrop) and the efficiency is limited.

A high-frequency switch with varying duty cycle maintains the output voltage. All voltage variations caused by the switching mechanism are then filtered out with an LC filter. Common examples of switched mode voltage regulators are DC/DC converters in power supplies. Advantages of DC/DC converters are their high efficiency, low heat generation, and ability to boost and/or buck voltage, and even provide negative voltage operation. Unfortunately, they may require more external parts and have a complicated design which increases noise in the circuit.
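The duty-cycle principle described above can be sketched for an ideal (lossless) buck converter, where the LC filter averages the switched waveform so the output is simply the input scaled by the duty cycle. The voltages below are hypothetical examples:

```python
def buck_output_voltage(v_in, duty_cycle):
    """Ideal buck converter: the LC filter averages the switched
    waveform, so Vout = D * Vin (switching and conduction losses ignored)."""
    return v_in * duty_cycle

def required_duty_cycle(v_in, v_out):
    """Duty cycle needed to reach a target output voltage (ideal case)."""
    return v_out / v_in

# Hypothetical example: stepping a 12 V supply down to 3.3 V.
d = required_duty_cycle(12.0, 3.3)   # 0.275, i.e. the switch is ON 27.5% of the time
v = buck_output_voltage(12.0, d)     # back to 3.3 V
```

A real converter's feedback loop continuously adjusts D to hold the output steady as the input voltage and load change.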

Quiescent Currents

Quiescent current (Iq) in a voltage regulation Integrated Circuit (IC) is the current drawn internally within the regulator, and not available to the load. It is thus relevant in either a no-load or "non-switching," but "enabled" condition. “Nonswitching” implies that no power switch in the IC is ON (closed), resulting in a high-impedance condition where the power stage is completely disconnected from the output. “Enabled” refers to the fact that the IC is turned ON via its EN pin and is not in a UVLO (or other deactivated) state.

Iq is measured as the input current under such conditions, and is therefore often referred to as "resting current." A commonly used name for the IC's condition under which quiescent current can be measured is its "quiescent state."

Quiescent currents constitute a source of inefficiency in linear regulators intended for use in battery-powered devices. As many Mobile IoT devices may remain in Power Saving Mode for prolonged periods, quiescent currents can become a major concern for battery life. An LDO may actually even be the more efficient choice whenever a very low-current load is applied! In the figure below, quiescent current is the leakage current Iq detected when the circuit is resting, i.e. when no load is applied. The Iq travels inside of the IC to ground.
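To see why Iq dominates battery life in mostly-sleeping devices, a back-of-the-envelope sketch follows: if quiescent current were the only drain, the cell's capacity divided by Iq gives an upper bound on lifetime. The capacity and current figures are hypothetical:

```python
def years_until_empty(capacity_mah, i_q_ua):
    """Years until the cell is drained by quiescent current alone
    (an upper bound: the load and self-discharge are ignored)."""
    hours = capacity_mah / (i_q_ua / 1000.0)  # mAh divided by mA
    return hours / (24 * 365)

# Hypothetical example: a 2400 mAh cell with a 25 uA quiescent drain
# is empty after roughly 11 years, before the application draws anything.
life = years_until_empty(2400, 25)
```

Halving Iq roughly doubles this bound, which is why sub-microamp regulators are prized in Power Saving Mode-heavy designs.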

Figure: Iq in an LDO Circuit

DC/DC Converters

A commonly-used type of switching voltage regulator is the DC/DC converter (often written as "DC-DC" or "DC-to-DC" converter). These power-conversion circuits use high-frequency switching along with inductors, transformers, and capacitors to filter out switching noise, producing a smooth, regulated DC voltage. Closed feedback loops maintain a constant output voltage even when the input voltage and output current change.

With efficiencies of 90% and above, they may outperform their linear regulator cousins in specific scenarios. Among the disadvantages of the DC/DC converter are its noise and complexity. Non-isolated DC/DC converters have their input ground connected to the output ground, whereas isolated varieties keep the two galvanically separated. There are four common topologies available:

  • Buck converters step the voltage down, such that output voltage, Vo < input voltage, Vi
  • Boost converters step the voltage up, such that output voltage, Vo > input voltage, Vi
  • Buck-Boost and SEPIC converters step the voltage up or down, such that output voltage, Vo ≥ OR ≤ input voltage, Vi
  • Negative Voltage, or Inverting Converter, supplies negative voltage

Figure: DC/DC Converter Circuit

One of the most important parameters to consider when selecting a DC/DC converter is its efficiency, the fraction of input power which reaches the load, defined as the fraction: (Vout • Iout) / (Vin • Iin). Some DC/DC converters are more than 90% efficient. The source providing power to the DC/DC converter must be able to provide enough power to account for the converter's inefficiency. Efficiency is typically specified by manufacturers using curves on the product specification, with peak efficiency achieved under a certain load current. At lower power outputs, where the amount of power required to power the circuit is similar to the load power, the DC/DC converter's efficiency is usually the lowest.
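The efficiency fraction given above can be computed directly, along with the input power a source must supply to cover the converter's losses. The operating point below is a hypothetical example:

```python
def converter_efficiency(v_in, i_in, v_out, i_out):
    """Fraction of input power reaching the load: (Vout*Iout) / (Vin*Iin)."""
    return (v_out * i_out) / (v_in * i_in)

def required_input_power(p_load_w, efficiency):
    """Input power the source must provide for a given load power."""
    return p_load_w / efficiency

# Hypothetical example: 5 V / 220 mA at the input, 3.3 V / 300 mA at the output.
eta = converter_efficiency(5.0, 0.220, 3.3, 0.300)  # 0.90, i.e. 90% efficient
p_in = required_input_power(0.99, eta)              # 1.1 W drawn for a 0.99 W load
```

Note that, as the text describes, this efficiency is not a constant: datasheet curves show it peaking at a certain load current and falling off at light loads.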

Dropout Voltage

Dropout voltage (Vdrop) is defined as the minimum input-to-output voltage differential at which the voltage regulator can still maintain regulation and supply the specified current. Any further reduction in the input voltage may result in a reduced output voltage. This value is highly dependent on the load current and junction temperature. In the figure below, the dropout voltage is the measured potential difference between input voltage (Vi) and output voltage (Vo) with a load applied.
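A simple headroom check follows from this definition: regulation holds only while the input exceeds the target output by at least Vdrop. The voltages below are hypothetical examples:

```python
def in_regulation(v_in, v_out_target, v_drop):
    """True if the input voltage leaves enough headroom above the
    target output for the regulator to stay in regulation."""
    return (v_in - v_out_target) >= v_drop

# Hypothetical example: a 3.3 V regulator with a 200 mV dropout
# regulates from a 3.6 V supply, but not from a sagging 3.4 V supply.
ok = in_regulation(3.6, 3.3, 0.2)
low = in_regulation(3.4, 3.3, 0.2)
```

In a battery-powered design this check should be made at the battery's end-of-life voltage, not its fresh nominal voltage, since Vdrop also grows with load current and temperature.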

Figure: Vdrop in an LDO Circuit

SIM Cards nuSIM (Integrated SIM)

The integrated SIM card referred to as nuSIM has been developed especially for Internet of Things applications where cost, battery efficiency, and simplicity along the value chain are decisive. This makes perfect sense for Mobile IoT technologies, such as NB-IoT or LTE-M, among others. In the IoT Solution Optimizer, you can use the Hardware, Module and Chipset Selection Guides to quickly identify which IoT devices and components have built-in connectivity using nuSIM technology.

Figure: nuSIM Logo

Typical Massive IoT applications such as smart parking, networked street lamps, garbage cans, or environmental sensors in buildings and cities use devices on a massive scale. As a deployment of large volumes of hardware may be associated with significant cost, nuSIM was created to reduce this factor, thus better suiting these cost-sensitive IoT applications. The nuSIM therefore dispenses with those eSIM functions that are superfluous for many IoT scenarios and that increase the cost per SIM. Features such as OTA profile download, multiple-profile support, profile switching, etc. are not supported. A smart meter, such as a networked water or electricity meter, for example, might send a tiny data packet into the network once a day and would not require functions like voice or SMS. Also, nuSIM profiles are much smaller, below 500 Bytes. This all makes the nuSIM extremely slim and cost-effective. In addition, it requires less energy than a conventional eSIM, thus extending the battery life of IoT devices.

The nuSIM is designed as part of a chipset and is therefore less sensitive to shocks or large temperature fluctuations than a SIM in a card slot. That’s an advantage when used in Industry 4.0, for example, in a factory or on a construction site. The absence of a SIM card slot also makes a closed device design possible, protected from moisture and dust. The nuSIM thus achieves a service life of at least 10 years; in other words, it usually lasts as long as the component itself. In addition, it’s almost impossible to access a nuSIM soldered into a device in order to manipulate it. The security level corresponds to that of a changeable SIM. The login information stored in encrypted form on the SIM enables secure and private access to mobile networks and guarantees the integrity of billing – which is particularly important for roaming, for example when a truck with a tracking module is driving across Europe. The operator profile is programmed by module and device manufacturers directly onto the chip during the production process. As a result, the nuSIM offers the end user out-of-the-box mobile Internet access.

Figure: nuSIM vs. eSIM

Figure: nuSIM and eSIM Address Different Segments

Industrial SIM Cards

Standard SIM cards are suitable for stationary objects in normal circumstances when it comes to temperature or humidity, for example. Industrial SIM cards are recommended for moving objects, IoT applications exposed to vibration or wear and tear, smart metering applications, as well as applications implemented in more extreme conditions.

Although a standard SIM card may have a lifetime of more than 10 years, its default lifetime is actually estimated to be between 2-5 years, whereas the default lifetime of an industrial SIM card may reach between 5-10 years. These calculations are based on the expectation of frequent SIM activity, which is not necessarily the case for LPWA / Mobile IoT applications.

Figure: Standard SIM vs. Industrial SIM

Embedded SIM

Embedded SIM (or electronic SIM, eSIM) is a SIM form factor compatible with the GSMA remote provisioning specifications. These eSIM/eUICC specifications allow enterprises to change and activate their SIM profile embedded in IoT devices remotely or over the air (OTA).

There are currently two GSMA remote provisioning specifications (M2M eSIM and Consumer eSIM), both of which simplify the logistics process for device distribution. The consumer implementation is intended for devices such as smartphones, smartwatches, tablet computers, and fitness bands, where the user triggers the SIM profile download or swap. The M2M specification, in turn, serves all other IoT devices in B2B and B2B2C markets, on devices without human interaction or even a user interface. The M2M eSIM follows a server-driven model, such that the operator- or service provider-owned remote provisioning server triggers the profile download and management procedures.

M2M applications require the network to control and automate decision making around IoT connectivity and the GSMA’s "Remote Provisioning Architecture for Embedded UICC" (eUICC) can be used for this. The three key elements for remote provisioning in this architecture are:

  • Embedded Universal Integrated Circuit Card (eUICC): A secure element electronic component of the SIM card that is fixed on a device and stores one or more subscription profiles. It is compatible with eSIM and removable SIMs. In effect, each profile enables the eUICC to function like a removable SIM.
  • SM-DP (Subscription Manager – Data Preparation): Responsible for preparing, storing and protecting operator profiles and for downloading and installing profiles onto the eUICC.
  • SM-SR (Subscription Manager – Secure Routing): Responsible for managing (enabling, disabling, deleting) profiles on the eUICC and securing the communication link between eUICC and SM-DP for the delivery of operator profiles.

A connectivity management platform with service support for eUICC technology enables global IoT deployments at scale using a single factory-installed eSIM or SIM (a single stock keeping unit, or SKU). Such a platform allows an eSIM or SIM to localize once an IoT device is deployed, anywhere in the world. eSIM-compatible eUICC technology simplifies manufacturing, logistics and deployment, keeps costs down and enables secure and resilient global scalability for IoT.

Subscriber Identity Module (SIM)

Subscriber Identity Modules (or Subscriber Identification Modules), universally referred to as "SIMs" or "SIM cards", are integrated circuits that securely store the 3GPP™ subscriber's International Mobile Subscriber Identity (IMSI) number and the related key. These parameters are used to identify and authenticate subscribers on cellular devices (for example, mobile phones, computers, tablets, IoT devices, wearables, etc.), as well as to store contact information and parameters used for auxiliary services. SIMs are integrated into Universal Integrated Circuit Card (UICC) physical smart cards, which are normally made of PVC with embedded contacts and semiconductors. As defined in the standards, SIMs enable the portability of user identity between different cellular devices.

The first UICC smart cards were the size of credit and bank identification cards. Sizes have dramatically reduced over the years, introducing ever-shrinking form factors in which the electrical contacts were kept the same. The component has also been introduced as a soldered chip integrated directly into the circuitry of specific consumer and industrial devices, known as "eSIM" or Embedded SIM. This enables more reliable, secure, and cost-efficient IoT solutions.

Diagram: SIM Form Factors

A SIM card contains a unique serial number (ICCID), international mobile subscriber identity (IMSI) number, security authentication and ciphering information, temporary information related to the local network, a list of the services the user has access to, and two passwords: a personal identification number (PIN) for ordinary use, and a personal unblocking key (PUK) for PIN unlocking. In Europe, the serial SIM number (SSN) is also often accompanied by an international article number (IAN) or a European article number (EAN), which is required when registering online for prepaid card subscriptions. Industrial SIM cards are also specially made to handle more robust or longer-term operations. Whereas historically 5-volt SIMs were used, the operating voltage of modern SIM cards is either 3 V or 1.8 V.
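The ICCID mentioned above ends in a Luhn check digit (per ITU-T E.118), so its integrity can be verified in software. A minimal sketch follows; the sample number is a commonly cited test value, not a real subscription:

```python
def luhn_valid(number: str) -> bool:
    """Luhn checksum validation, as used for ICCID check digits
    (ITU-T E.118). Returns True if the full number, including its
    trailing check digit, passes."""
    digits = [int(d) for d in number if d.isdigit()]
    total = 0
    # Walking from the right, double every second digit and
    # subtract 9 when the doubled value exceeds 9.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# A widely used sample ICCID passes; corrupting the last digit fails.
luhn_valid("89014103211118510720")  # True
luhn_valid("89014103211118510721")  # False
```

The leading "89" is the ITU-T E.164 major industry identifier for telecommunication purposes, followed by a country code and issuer identifier.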

Decades ago, the SIM consisted of both hardware and software components. With the advent of the 3G standard (UMTS), a tiered approach was taken: the SIM was henceforth a software application, whereas the hardware became the Universal Integrated Circuit Card (UICC). UMTS introduced a new application, the Universal Subscriber Identity Module (USIM), which brought, among other things, security improvements such as mutual authentication and longer encryption keys, along with an improved address book.

Power Supply Lithium-Thionyl Chloride (Li-SOCl2)

Lithium Thionyl Chloride (Li-SOCl2) technology was created and developed in the mid-1960s, primarily for military devices (radios). Its processability and performance repeatability have been improved since that time, so it can be considered a mature technology.

Chemical Reaction: 4 Li + 2 SOCl2 → 4 LiCl + SO2 + S

Lithium Thionyl Chloride primary (non-rechargeable) electrochemistry offers the best choice for long-duration applications, since it combines high energy density, a wide operating temperature range (from -60°C up to +85°C), and low self-discharge (from less than 1% up to 3% in storage at 20°C). The technology has been widely adopted for powering electronic devices, particularly communicating devices, thanks to its high operating voltage (3.6 V vs. 1.5 V for alkaline systems), which remains very stable during discharge, i.e. throughout the battery's use.

Lithium Thionyl Chloride cells exist with two different construction types: bobbin and spiral designs. Whilst the bobbin design is suited for low drain currents, limited pulses and several years lifetime, spiral designs are ideal for powering medium- to high-pulse applications, such as IoT devices with Low Power Wide Area (LPWA) communications.

Figure: Lithium Thionyl Chloride Battery

Lithium Manganese Oxide (Li-MnO2)

Lithium Manganese Dioxide (Li-MnO2) technology is mature, thanks to its more than 30-year history of deployment. It has been widely used in both military and consumer applications such as cameras. On the market, most Lithium Manganese Dioxide cells have a spiral design, but prismatic and pouch form factors also exist. In its spiral form, which provides more electrode surface, Li-MnO2 technology is compatible with high continuous or pulsed current consumption profiles. Suitable for a wide range of temperatures (from -40°C up to +80°C for some cells), Lithium Manganese Dioxide technology differentiates itself by the absence of a significant passivation effect, which greatly reduces the voltage drop that may occur during pulsed discharges with other primary cell technologies.

This chemistry is already widely adopted for high power applications, but its lower nominal voltage (3 V against 3.6 V for Li-SOCl2) had always presented a barrier as it was close to the cut-off voltage (normally 2.5 V to 2.8 V) for IoT devices electronics. This situation has changed with the recent introduction of low-consumption electronic components. Lithium Manganese dioxide cells can be successfully selected for IoT devices communicating with Low Power Wide Area (LPWA) technologies, if the cut-off voltage of the components and the operating temperature range are compatible with the operating voltage of this technology.

Batteries

A battery is a system which stores chemical energy and converts it into electrical energy thanks to an electro-chemical reaction. It serves as the primary source of energy for powering an IoT device. There are two fundamental types of batteries, each of which can be further divided into sub-groups based on their chemistry:

  • Primary Batteries, which are not rechargeable
  • Secondary Batteries, which can be recharged and reused several times

Batteries are often referred to as "electrochemical cells." When one connects the battery to an external circuit, a reduction–oxidation reaction is triggered, releasing energy in the form of an electrical current. When a battery supplies electrical power, its positive pole is called the "cathode" (an electron taker / oxidizing agent) and its negative pole is referred to as the "anode" (an electron provider / reducing agent). The stronger the oxidation or reduction power of the chemistry used, the greater the difference of potential, or voltage, between both poles of the battery. In this sense, the cell supplies current to the IoT device by transferring electrons from the negative pole over an external electrical circuit to its positive pole. Ions are transferred within the battery from anode to cathode through a porous separator, which is inserted between both electrodes. This internal component mainly acts as an electronic insulator. The entire system is furthermore immersed in an ionically conductive electrolyte transporting ions formed at the anode to the cathode side, within a sealed can.

Figure: Function of Battery in Electrical Circuit

Battery chemistries are selected based on the electric potential between the different elements or compounds therein. As seen in the figure below, the voltage between cathode and anode can vary extensively.

Figure: Electric Potential of Different Battery Chemistries

Primary Batteries:

These batteries are intended for single use (or "disposable") and cannot be re-charged. During their discharge, the electron provider (the anode) is irreversibly consumed. The most common example of primary batteries is the alkaline type; however, in Mobile IoT, where one usually tries to achieve very long battery lifetimes and reduce maintenance cycles for IoT applications, it is recommended to use primary Lithium chemistries, e.g.:

  • Lithium-Thionyl chloride (Li-SOCl2), where Lithium is the anode and Thionyl chloride is the cathode
  • Lithium-Manganese dioxide (Li-MnO2), where Lithium is the anode and Manganese dioxide is the cathode

Depending on the chemistry used, one can leverage different performance characteristics, such as the cell's nominal voltage. In addition to the battery chemistry, one should also consider the different internal construction of the cell:

  • Bobbin construction: This is the classic construction type showing high capacity and energy density. Bobbin type batteries are optimally used during several years with low currents (µA to a few mA) and limited pulse currents (from 5 to 100 mA).
  • Spiral construction: This architecture brings more electrode surface, leading to higher current capability, and is thus ideally used for power applications.

Secondary Batteries:

These batteries can be re-charged during their lifetime, restoring the original, depleted composition by applying a reverse electric current. For small devices, the most commonly employed battery type is the lithium-ion battery; however, there are also other types of rechargeable batteries such as the lead acid, which is used in vehicles, for example.

In battery-powered Mobile IoT devices, there are several challenges to overcome, especially if one aims to employ primary batteries in a product that should have a long battery life. On the one hand, the Mobile IoT wireless communication chipset or module, as well as other hardware components, usually draws bursts of current for a short time, also called pulse currents. Such pulses are typically up to several hundred milliamperes. A battery needs to be dimensioned to deliver such currents even in potentially extreme temperature conditions (very cold or hot environments). On the other hand, one has to consider the battery's self-discharge and capacity loss during storage. For a battery lifetime estimation, several other factors should be considered as well (e.g. cut-off voltages, cell efficiency, leakage currents, temperature effects, etc.).

A precise and reliable modeling of battery lifetime is complex and can be best performed by the battery suppliers. In the IoT Solution Optimizer, a basic battery modeling system is implemented, taking several aspects into account. That said, for a more precise calculation considering all of the aspects of your product's life, such as manufacturing, storage, usage and deployment environment, it is recommended to contact your battery supplier.
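As a crude illustration of such basic modeling (not the IoT Solution Optimizer's actual algorithm), one can derate the rated capacity, then subtract the load's yearly consumption and a fixed yearly self-discharge until the cell is empty. All parameter values (capacity, derating factor, currents) are assumptions for illustration:

```python
def estimate_life_years(capacity_mah, avg_current_ma,
                        self_discharge_pct=1.0, usable_fraction=0.85,
                        max_years=25):
    """Crude battery-life sketch: year by year, subtract the load's
    consumption and a fixed self-discharge percentage from the
    remaining (derated) capacity. Real supplier models also account
    for pulse currents, temperature, aging, and cut-off voltage."""
    remaining = capacity_mah * usable_fraction
    yearly_load = avg_current_ma * 24 * 365  # mAh consumed per year
    years = 0.0
    while years < max_years:
        loss = yearly_load + remaining * self_discharge_pct / 100.0
        if loss >= remaining:
            return years + remaining / loss  # fractional final year
        remaining -= loss
        years += 1
    return float(max_years)

# Hypothetical example: an 8500 mAh cell at 100 uA average draw
# lasts roughly 8 years under these assumptions.
life = estimate_life_years(8500, 0.1)
```

Even this toy model shows why self-discharge matters: for very low average currents, the yearly percentage loss rivals the load's own consumption.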

Peak Current

For many battery-powered IoT applications, peak currents are an important aspect of the design. Particularly in Mobile IoT designs, the current peaks of the wireless communication chipset or module (the modem) should not be underestimated. The IoT Solution Optimizer considers the peak currents of the selected Mobile IoT wireless modem to help users select the best-fit battery from the list of integrated products on its product shelf.

In addition to the modem’s behavior, microcontrollers (MCUs), linear converters, sensors and/or actuators may draw peak currents as well, which often occur simultaneously with those of the Mobile IoT wireless modem. For this reason, the IoT Solution Optimizer adds the peak currents of these remaining hardware components to those of the selected Mobile IoT wireless modem. Your project’s composite value can therefore be compared to the integrated batteries’ ratings when using the Battery Selection Guide. If you are not sure what value to set for this field, please set it to 0 mA; in this case, the peak currents of the remaining hardware components will not be considered at all.
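Under the worst-case assumption that all component peaks coincide with the modem's transmit burst, the composite value is a simple sum. The component currents below are hypothetical examples, not values from any specific product:

```python
def composite_peak_ma(modem_peak_ma, other_peaks_ma):
    """Worst-case composite peak current: assume every component's
    peak coincides with the modem's transmit burst."""
    return modem_peak_ma + sum(other_peaks_ma)

# Hypothetical example: a 230 mA modem burst plus an MCU (15 mA)
# and a sensor (5 mA) peaking at the same moment.
peak = composite_peak_ma(230, [15, 5])  # 250 mA
```

As the next paragraph explains, this composite figure still cannot be compared directly against a battery datasheet's pulse capability, which is rated under narrow test conditions.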

Generally, when reading battery datasheets, it is important to understand that one cannot simply compare the peak currents of a Mobile IoT modem and other hardware components with the peak current (also referred to as the “pulse capability”) of a given battery. The maximum peak currents for batteries are determined under very specific test conditions. For example, maximum peak currents are often specified for a certain maximum time duration (e.g. 100ms), with a certain frequency (e.g. every 2 minutes), and at a certain temperature (usually room temperature, e.g. 20°C). Especially in the case of Mobile IoT wireless modems, it is very unlikely that such conditions are met; therefore, the IoT application’s peak currents should not be directly compared with the battery peak currents taken from the product datasheets. Moreover, consider the significant impacts of aging and temperature on the battery’s pulse capabilities. As such, it is recommended to contact your battery supplier for further questions and accurate lifetime calculation.

Nominal Voltage

The nominal voltage of a cell or battery represents its rated value under normal operating conditions. The datasheet of a selected cell gives precise information regarding the operating conditions corresponding to nominal voltage ratings. For example, a nominal voltage can be rated at +20°C and 2mA continuous current. The majority of Mobile IoT applications are, however, very different and usually cannot be represented by a constant-load model. Apart from nominal voltage, many suppliers may instead indicate the “open circuit voltage,” rated at a given temperature or temperature range. Cell datasheets typically show the graph of voltage versus current at different temperatures; however, for a more accurate model of the relation between temperature and voltage, it is usually necessary to contact your battery supplier.

If configured in the Battery Selection Guide, the IoT Solution Optimizer can compare the cut-off voltage of components in the IoT device, the IoT device’s specific peak current and the temperature extremes of the operating environment against the battery’s voltage ratings at peak current.

Nominal Capacity

The nominal battery capacity is the rated capacity value in Ampere-hours (Ah) measured in defined operating conditions such as discharge rate, environment temperature and cut-off voltage. The capacity is calculated by multiplying the discharge current times the time until the defined cut-off voltage threshold is met. Likewise, a cell or battery capacity can be represented in terms of Watt-hours (Wh), calculated by multiplying the discharge current times the time until the defined cut-off voltage threshold, times the nominal voltage of the battery.
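The two capacity figures defined above can be computed directly; the discharge conditions below are hypothetical examples:

```python
def capacity_ah(discharge_current_a, hours_to_cutoff):
    """Rated capacity in ampere-hours: discharge current times the
    time until the cut-off voltage threshold is met."""
    return discharge_current_a * hours_to_cutoff

def capacity_wh(discharge_current_a, hours_to_cutoff, nominal_voltage_v):
    """Energy in watt-hours: the Ah figure times the nominal voltage."""
    return capacity_ah(discharge_current_a, hours_to_cutoff) * nominal_voltage_v

# Hypothetical example: 2 mA drawn for 1200 h until cut-off,
# on a cell with a 3.6 V nominal voltage.
ah = capacity_ah(0.002, 1200)        # 2.4 Ah
wh = capacity_wh(0.002, 1200, 3.6)   # 8.64 Wh
```

Remember that these are rated values: the capacity actually available in the field shrinks with temperature, peak currents, and the application's cut-off voltage, as described below.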

For everyday IoT applications, the available battery capacity varies based on real-life conditions. There are many parameters affecting the available capacity, such as temperature, peak currents, consumption profile, and minimal application voltage. Therefore, an accurate calculation of the expected battery capacity for your IoT application can be quite complex, as it must consider both intrinsic properties of the battery cell and typical parameters of use cases and environmental conditions.

Batteries additionally have a certain self-discharge rate which is an important factor for battery-powered devices operating for several years on the same battery. Within the IoT Solution Optimizer, key aspects are considered for the nominal capacity of a given battery; however, due to the complexity of modeling, we are limited by the available capacity for a given average temperature range which is always considered at 20°C (room temperature), unless specified otherwise.

Please find additional information on the available capacity under specific conditions in the datasheet of your selected battery or cell. For a more accurate calculation, please contact your battery supplier.

Self-discharge

Battery cell self-discharge is an important factor to consider for IoT applications which must operate several years with a given battery. One should distinguish between the following two self-discharge phenomena:

  • Self-discharge in storage: the storage period of a battery can be significant, spanning from the battery's manufacturing date, through the lead time until its integration into the IoT device, to its delivery and storage up to the beginning of the device's deployment. This specific storage period is not considered in the IoT Solution Optimizer’s calculations, thus the associated self-discharge is not considered.
  • Self-discharge in use, while the IoT device is in normal operating mode.

Please note that the self-discharge under typical operating conditions can be very complex to model and depends on several parameters, such as peak currents and consumption profile, temperature, the cell's age, etc. In the IoT Solution Optimizer, a fixed percentage of yearly self-discharge is considered. This provides a good indication of the expected performance, but should be studied more deeply for the careful design of a commercial product. In fact, the actual self-discharge will depend on both intrinsic characteristics of the battery technology and real-life conditions. It is highly recommended to contact your battery supplier for more detailed modeling and consulting.
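A fixed yearly self-discharge percentage compounds over the device's life; a minimal sketch with assumed values:

```python
def remaining_after_self_discharge(initial_mah, pct_per_year, years):
    """Capacity left after compounding a fixed yearly self-discharge
    rate, ignoring the load's own consumption."""
    return initial_mah * (1.0 - pct_per_year / 100.0) ** years

# Hypothetical example: a 2600 mAh cell losing 1% per year
# retains roughly 2351 mAh after 10 years in operation.
left = remaining_after_self_discharge(2600, 1.0, 10)
```

Over a 10-year deployment even a "low" 1% yearly rate removes nearly a tenth of the capacity, which is why this term belongs in any long-life battery budget.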

Spiral Cell Design

There are different ways to design and build battery cells, each of which has a direct impact on the performance of the cell. Proprietary production techniques are used in conjunction to further enhance aspects of the design. One of the oldest ways to construct a battery is the prismatic “Flat Plate” architecture often found in lead-acid batteries or nickel-based electrochemistries. Primary cells employed in the IoT industry, however, are typically produced in a hermetically-sealed cylindrical casing. For these, we can basically distinguish two main construction types:

  • Bobbin Architecture
  • Spiral Architecture

Figure: Spiral Battery Construction

Source: SAFT Battery Training

Spiral architectures usually leverage a construction consisting of anode and cathode sheets with a separation layer in between, rolled and fitted into a can with electrolyte. As the need for higher currents in IoT devices increases with Mobile IoT communication networks – take a Power Class 3 (23dBm) LPWA device – battery technologies need to deliver the necessary peak currents during active time. A spiral construction offers significantly more surface area between the electrodes which reduces the internal resistance and increases the current capability, while showing a lower energy density compared to bobbin cells. This greater contact surface between electrodes, however, may lead to an increased self-discharge.

Bobbin Cell Design

There are different ways to design and build battery cells, each of which has a direct impact on the performance of the cell. Proprietary production techniques are used in conjunction to further enhance aspects of the design. One of the oldest ways to construct a battery is the prismatic “Flat Plate” architecture often found in lead-acid batteries or nickel-based electrochemistries. Primary cells employed in the IoT industry, however, are typically produced in a sealed cylindrical casing. For these, we can basically distinguish two main construction types:

  • Bobbin Architecture
  • Spiral Architecture

Figure: Bobbin Battery Construction

Source: SAFT Battery Training

Bobbin architectures consist of a straightforward cylindrical construction in form of a can with an electrode pole through the center, electrically isolated from the can and connected to the positive battery terminal (cathode). A Lithium-metal layer on the can itself forms the negative battery terminal (anode). The design is then completed by a liquid electrolyte filling available volume inside the can.

Bobbin cells provide higher energy density and lower self-discharge than spirally-designed cells, because of the relatively smaller contact surface between the electrodes. That said, the drawback is the cell's limited current and pulse-current capability, which is often required in Mobile IoT applications (23dBm transmit power in Power Class 3 devices). Thus, these cells might be used in parallel with a pulse-sustaining device, such as a capacitor, EDLC or Hybrid Layer Capacitor, to achieve higher pulse-current profiles.

Cut-off Voltage

The cut-off voltage of an IoT device is the minimum voltage required by its hardware to operate:

  • This cut-off voltage is often given by components, such as the Mobile IoT wireless communication chipset or module, which have a specified voltage range. Please check the datasheets of your components for further details.
  • Battery-powered devices may use a low-dropout (LDO) regulator, a DC linear voltage regulator that regulates the output voltage. In such cases, the cut-off voltage is an important parameter to consider for proper device operation; please also take the LDO’s dropout voltage into account. If the voltage drops below this threshold in operation, it may cause unpredictable behavior or a complete shutdown of the application.

Apart from the hardware requirements, it is important to understand that the minimum voltage of a battery depends on many factors such as temperature, peak currents, aging effects, etc.

In the IoT Solution Optimizer, we give general guidance, and batteries can be further analyzed for your use case. That said, for a commercial rollout of your product, it is necessary to do a more accurate calculation considering aspects of both environmental conditions and hardware requirements.

Electronics Type

You can specify the voltage range of your hardware to help select the proper battery from a list of pre-integrated solutions. Apart from the nominal voltage, many IoT applications require a well-defined voltage range for safe operations. This includes:

  • A minimum supply voltage
  • A maximum supply voltage

Especially for battery-powered devices, the supply voltage may not always be stable depending on different boundary conditions such as battery chemistry, peak currents, operating temperature, aging effects, etc. The minimum voltage of your IoT device is the threshold required by your hardware to operate correctly. Often, this cut-off voltage is determined by specific components such as the wireless communication chipset or module. These may have a certain specified operating voltage range (e.g. min 3.1V to max 4.0V). Please check the datasheets of your components for further details.

Average Current Consumption

For a more precise calculation of the overall power consumption of your product, it is necessary to indicate the average current consumption between data transaction events for non-modem hardware components. This ensures that the IoT Solution Optimizer considers the entire power consumption of the hardware outside of the Mobile IoT wireless communication chipset or module (modem). If you are unsure of the value, you may leave the field at zero; this aspect is then not considered in the analysis results.

Please note that the average current consumption includes not only the consumption of the microcontroller, sensors and/or actuators, but also any other components in your IoT application, as well as any leakage currents which may occur throughout the operating lifetime.

Please note that modeling is performed only at one specific temperature in the IoT Solution Optimizer. In reality, temperature variations may have a significant impact and would need to be considered as well. There are two possibilities to specify the average current consumption:

  1. Basic Configuration: This specifies the average current of the hardware (except for the Mobile IoT wireless communication chipset or module) between two data transaction events.
  2. Advanced Configuration: This option distinguishes between an "active time" and a "sleep time." Active time is the period of higher power consumption during a data transaction event, when the microcontroller and other hardware elements are active. Sleep time is the remaining time between any two data transaction events, during which the microcontroller and other hardware elements are primarily dormant and may wake up periodically. The average current in this mode is expected to be much smaller, as Mobile IoT applications try to save as much power as possible during this time.
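The "Advanced Configuration" described above amounts to a simple time-weighted average. The sketch below illustrates the arithmetic; all currents and durations are hypothetical example values:

```python
# Sketch of the active/sleep averaging behind the "Advanced Configuration".
# Currents and durations below are hypothetical illustration values.

def average_current_ma(i_active_ma: float, t_active_s: float,
                       i_sleep_ma: float, t_sleep_s: float) -> float:
    """Time-weighted average current between two data transaction events."""
    total_s = t_active_s + t_sleep_s
    return (i_active_ma * t_active_s + i_sleep_ma * t_sleep_s) / total_s

# Example: 30 s active at 40 mA, then 3570 s of sleep at 0.01 mA (10 uA)
avg = average_current_ma(40.0, 30.0, 0.01, 3570.0)
print(f"{avg:.3f} mA")  # prints: 0.343 mA
```

Note how the sleep current dominates the result once the sleep interval is long, which is why leakage currents matter so much for battery life.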
Hardware Power

The IoT Solution Optimizer calculates the power consumption of the Mobile IoT wireless communication chipset or module based on your use case, configured 3GPP™ power saving features, your application payload and protocol settings, data transmission frequency, etc. That said, it is important to also specify the power consumption of the additional hardware components in the IoT device besides this modem. For this purpose, it is possible to specify an additional power consumption so that it is considered in our calculations.

Additional hardware power consumption can be caused by a number of components, for example:

  • Antennas
  • Microcontrollers
  • Sensors
  • Actuators
  • GNSS solutions (GPS, GLONASS, BeiDou, Galileo, QZSS)
  • DC/DC converters
  • LDO regulators
  • Quiescent current
  • Memories
  • RTCs

In future releases of the IoT Solution Optimizer, we will be integrating additional features to model some of these aspects in more detail.

Battery Supplier

It is important to select a trustworthy battery supplier who can provide the necessary technical consultancy for your product design. The lithium battery-modeling features within the IoT Solution Optimizer have been developed in close cooperation with Saft S.A., one of the world's foremost battery suppliers.

Saft in brief:

For nearly 100 years, Saft’s longer-lasting batteries and systems have provided critical safety applications, back-up power and propulsion for our customers. Their innovative, safe and reliable technology delivers high performance in space, at sea, in the air and on land.

Figure: Saft in Figures

Saft is the producer of choice for some of the world's most demanding customers, and its batteries, systems and solutions make a difference across a broad range of market sectors. Saft is a wholly-owned subsidiary of Total S.A.

For further information please visit us at www.saftbatteries.com.

Battery Configurations

Battery pack configurations allow you to connect multiple batteries in serial (S) and/or parallel (P). Depending on the configuration, a battery-protection circuit may be required.

Figure: Battery pack with two batteries in serial (2S)

Cells in Series:

  • The voltages of the cells are added
  • The capacity and current delivery stay the same
  • An alternative to a single larger cell, where the larger cell would not fit the mechanical constraints of the design or where a sufficiently large single cell is not available
  • An alternative to a boost converter, which is typically less efficient than a step-down (buck) converter

Figure: Series Configuration

Cells in Parallel:

  • The voltage stays the same
  • The capacity and current delivery are added
  • When cells are connected in parallel, battery protection circuitry may become necessary to avoid cross-charging

Figure: Parallel Configuration

Cells in Series and in Parallel:

  • The voltages of the cells in series are added
  • The capacities and current delivery of the parallel strings are added

Figure: Series and Parallel Configuration
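The series/parallel rules summarized above reduce to simple multiplication: voltage scales with the number of cells in series, capacity with the number of parallel strings. The 3.6 V / 17 Ah cell below is a hypothetical example value:

```python
# Sketch of the series/parallel pack arithmetic described above.
# The example cell (3.6 V, 17 Ah) is a hypothetical bobbin-type value.

def pack_parameters(cell_voltage_v: float, cell_capacity_ah: float,
                    series: int, parallel: int) -> tuple:
    """Pack voltage scales with cells in series; capacity with parallel strings."""
    return cell_voltage_v * series, cell_capacity_ah * parallel

# Example: a 2S2P pack of 3.6 V / 17 Ah cells
v, ah = pack_parameters(3.6, 17.0, series=2, parallel=2)
print(v, ah)  # 7.2 V pack voltage, 34 Ah pack capacity
```

Remember that, as noted above, parallel strings may additionally require protection circuitry to avoid cross-charging.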

3GPP Connectivity Power Saving Features

Rel.13 Release Assistance Indication

The 3GPP™ Release 13 Early Release Assistance Indication feature (TS24.301) helps IoT applications further reduce device power consumption and improves control plane latency. This is achieved by allowing the IoT device to prematurely tear down the Layer 3 Radio Resource Control (RRC) bearer between itself and the eNodeB on the mobile network operator's radio access network. This is done by including a Release Assistance Indicator IE when sending data, to inform the network that no subsequent uplink or downlink data transmission (e.g. an acknowledgement or response from the application server) is expected. Without this feature, the IoT device is forced to remain in RRC_CONNECTED mode until the expiration of the eNodeB's RRC Activity Timer, which is typically 20-30 seconds.

Figure: Normal Transition from Connected Mode to Idle Mode

By activating the Early Release Indication, the IoT device is able to go straight into Idle Mode after data transmission and/or reception. Depending on the chipset solution being used, this means that up to 50mA of current may be saved by the IoT device. Over time, these savings add up and can significantly extend battery lifetime. In order to support application developers with usage of the Early Release Indication, the IoT Solution Optimizer only presents this feature if the selected chipset or module supports it. Furthermore, the AT Command Guide in the corresponding project folder under "MyProjects" indicates how to activate and use this feature.

Figure: Activation of Early Release Assistance Indicator

As a word of caution, Early Release Assistance should only be triggered by the IoT application when no additional uplink or downlink traffic is expected in the near term. Any premature release of the RRC bearer would mean that the IoT device would need to waste additional power transmitting on the Random Access Channel and reestablishing the RRC bearer! This generally costs more power than simply remaining in RRC_CONNECTED for the duration of the RRC Activity Timer. Please ensure that you trigger this feature only when it is safe to assume that no additional communication will happen to/from the application server within the next 10-20 minutes.

Figure: Proper Usage of the Early Release Assistance Feature
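The trade-off described above can be sketched with a back-of-the-envelope charge comparison. All currents and durations below are hypothetical illustration values, not measured chipset figures:

```python
# Rough comparison: waiting out the RRC Activity Timer versus releasing
# the bearer early but having to reattach when traffic arrives anyway.
# All currents, durations and scenarios are hypothetical illustrations.

def charge_mas(current_ma: float, duration_s: float) -> float:
    """Charge drawn over an interval, in milliamp-seconds (mAs)."""
    return current_ma * duration_s

# Case 1: stay in RRC_CONNECTED for a 20 s Activity Timer at ~50 mA
stay_connected = charge_mas(50.0, 20.0)                   # 1000 mAs

# Case 2: release early, but unexpected downlink traffic forces a
# random-access + RRC reestablishment burst (assume 5 s at 100 mA)
# on top of 20 s of idle current at 1 mA.
early_release = charge_mas(1.0, 20.0) + charge_mas(100.0, 5.0)  # 520 mAs

print(stay_connected, early_release)
```

With these illustrative numbers the early release still wins, but a longer or repeated reattach burst quickly reverses the outcome, which is why the feature should only be triggered when no further traffic is expected.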

Mobile IoT Power Saving Features

The 3GPPTM specification has defined various power saving features which can be used in Mobile IoT solutions to conserve battery power:

  • Release 12 Power Saving Mode (PSM)
  • Release 13 Extended Discontinuous Reception (eDRX)
  • Release 13 Early Release Assistance Indication
  • Release 13 Long-Periodic Tracking Area Updates

It is critical to note though that these power saving features must be used according to the specific IoT application use-case. More is certainly NOT better, as they may lead in specific cases to a waste of battery life, or even a failing product design. The table below indicates what combinations of power saving features can be used in different scenarios. Naturally, many of these can be combined, as illustrated in the figure further below.

Figure: Summary of Mobile IoT Power Saving Features

Figure: PSM and eDRX Features Activated Simultaneously

The IoT Solution Optimizer only presents those power saving features for configuration that meet the project criteria. Not only must the mobile operator network support the feature, but the selected radio chipset or module must also allow for configuration and usage of the functionality. The "AT Command Guide" list generated for each project under "My Projects" elaborates on how each feature can be activated and used by the IoT application.

Extended Discontinuous Reception

Extended Discontinuous Reception (eDRX) is an extension of an existing LTE feature which can be used by IoT devices to reduce power consumption. It was specified in 3GPP™ TS23.682 and TS24.301. eDRX can be used without PSM or in conjunction with PSM to obtain additional power savings. Today, millions of smartphones use Discontinuous Reception (DRX) to extend battery life between recharges. By momentarily switching off the receive section of the radio chipset for a fraction of a second (the interval being controlled by the network-defined DRX Timer parameter, TDRX), these smartphones are able to save power. When the device wakes up, the receiver listens for the Physical Downlink Control Channel. The smartphone cannot be contacted by the network during the period that it is not listening, but if the period of time is kept rather short, the smartphone user will not experience a noticeable degradation of service.

In a similar way, the Extended DRX feature allows the time interval during which a device is not listening to the network to be greatly increased. IoT devices perform the DRX procedure during fixed time windows, called Paging Transmission Windows (PTW), as configured by the IoT application. Between 4-16 paging reception slots can be accommodated within each Paging Transmission Window. The subsequent PTWs are furthermore offset from each other by a second timer, TeDRX, which represents the eDRX Cycle; this timer can also be defined by the IoT application. In between, the receive path of the radio chipset is deactivated. For M2M or IoT applications, it might be quite acceptable for the device to not be reachable for many seconds, or even hours. Although it does not provide the same levels of power reduction as PSM, eDRX may provide a good compromise for many use cases between device reachability and power consumption.

In summary, there are three features which control how eDRX is configured:

  • TDRX, the duration of a DRX period, as defined by the mobile network operator
  • TPTW, the duration of the Paging Transmission Window, as defined by the IoT application; the PTW controls the number of DRX cycles completed in series before the chipset receive path is deactivated for a longer period until the next PTW
  • TeDRX, the duration between the start of one PTW and the start of the next, as defined by the IoT application; the interval where the chipset has reduced its power consumption may be seen as a form of "Sleep Mode"

Figure: eDRX Mechanisms and Power Consumption Values

The IoT Solution Optimizer allows users to configure the TPTW and TeDRX values, wherever the selected mobile operator network supports the eDRX functionality.
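Under a simplified model of the three timers above (PTW length ≈ number of paging slots × TDRX), the share of time the receiver actually listens can be sketched as follows. The timer values below are illustrative, not operator defaults:

```python
# Simplified eDRX duty-cycle sketch: the receiver listens only during the
# Paging Transmission Window of each eDRX cycle. Timer values are
# illustrative examples, and the PTW model is a simplification.

def edrx_listen_fraction(t_drx_s: float, slots_per_ptw: int, t_edrx_s: float) -> float:
    """Fraction of each eDRX cycle spent inside the Paging Transmission Window."""
    t_ptw_s = t_drx_s * slots_per_ptw
    return t_ptw_s / t_edrx_s

# Example: 2.56 s DRX cycles, 4 paging slots per PTW, 20.48 min eDRX cycle
frac = edrx_listen_fraction(2.56, 4, 20.48 * 60)
print(f"{frac:.4%}")  # prints: 0.8333%
```

Even with these modest settings the receive path is powered for well under 1% of the time, which is where the eDRX savings come from.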

Tracking Area Updates

Tracking Area Updates (TAU) are sent by the chipset protocol stack at regular intervals to keep the network updated about its location. This is in conformance with 3GPP™ Layer 3 procedures. If the IoT device fails to send the TAU before the expiration of the TAU timer, the network considers that the device is no longer within its footprint. It is subsequently de-registered from the core network. As long as the device does send the TAU, the data context is kept active and there is no need to perform an ATTACH to the network. With each message sent or received by a device, the TAU timer is reset; the communication from the device is considered to be an implicit tracking area update, thus reducing signaling. The Tracking Area has two main identities:

  • Tracking Area Code (TAC)
  • Tracking Area Identity (TAI)

The Tracking Area Code identifies the tracking area within a particular network. Combining the TAC with the PLMN-ID gives the globally unique Tracking Area Identity, TAI (TAI = PLMN-ID + TAC = MCC + MNC + TAC).
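The composition above can be sketched as simple concatenation of the three identifiers. The MCC/MNC/TAC values below are arbitrary examples, not a real network assignment:

```python
# Sketch of the TAI composition: TAI = PLMN-ID + TAC, where the PLMN-ID
# itself is MCC + MNC. Identifier values below are arbitrary examples.

def tracking_area_identity(mcc: str, mnc: str, tac: str) -> str:
    """Globally unique TAI built as MCC + MNC + TAC."""
    return mcc + mnc + tac

# Example: MCC 262 (Germany), a two-digit MNC, and a hexadecimal TAC
tai = tracking_area_identity("262", "01", "1F2A")
print(tai)  # prints: 262011F2A
```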

Please note that the IoT device's chipset protocol stack will trigger a TAU in any of the following cases:

  • When a device moves to a new Tracking Area which is not included in the list of Tracking Areas with which it was registered; this becomes relevant for devices that regularly move large distances across the mobile network operator footprint, or roam onto neighboring networks
  • When the T3412 timer expires, the chipset protocol stack triggers the Tracking Area Update procedure; the T3412 timer's value is provided in the ATTACH ACCEPT message from the network
  • To enable, disable, or change the PSM parameters after an ATTACH
  • To enable, disable, or change the Long Periodic TAU Parameters after an ATTACH
  • To enable, disable, or change eDRX parameters after an ATTACH

A major reason for moving IoT applications from legacy technologies, such as 2G or 3G, onto NarrowBand IoT or LTE-M is the availability of a long-periodic timer. This feature allows the interval between TAU events to be extended.

Power Saving Mode

Power Saving Mode (PSM) is specified in 3GPP™ Release 12 to help LTE devices conserve battery power and potentially achieve longer battery lives. The feature was subsequently inherited by the NarrowBand IoT and LTE-M specifications. Although it is possible for a device’s application to shut down its radio module or chipset to conserve battery power, the device would subsequently need to reattach to the network when the radio is turned back on. This reattach procedure consumes energy that can become significant over time, and generates unnecessary signaling. As such, this procedure should be avoided if it would need to occur too frequently. The alternative is to use PSM to disable parts of the chipset protocol stack and drop power consumption into the micro-Ampere range.

Figure: Using PSM to Conserve Battery Power

When a device initiates PSM with the network, it provides two preferred timers (T3324 and T3412). The time during which the IoT device module or chipset is in so-called "Deep Sleep Mode" is the difference between these timers (T3412-T3324). The network may accept these values or set different ones. The IoT Solution Optimizer indicates to users if the network allows IoT application overwriting of the network defaults. The network then retains the state information and the IoT device remains registered with the network during its hibernation. If a device wakes up before the expiration of the time interval to send data, a reattach procedure is not required.

The drawback of using PSM is that the IoT device cannot be contacted by the network while its module or chipset is asleep. The inability to be contacted may preclude the use of PSM for downlink-centric applications requiring frequent or unscheduled communication to IoT devices (e.g. tracking solutions).
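The timer relationship described above (Deep Sleep duration = T3412 - T3324) can be sketched as follows. The 24-hour TAU period and 30-second reachability window are illustrative values, not network defaults:

```python
# Sketch of the PSM timer arithmetic: the device stays reachable for
# T3324, then hibernates until T3412 expires. Values are illustrative.

def deep_sleep_seconds(t3412_s: int, t3324_s: int) -> int:
    """Duration spent in 'Deep Sleep Mode' per PSM cycle (T3412 - T3324)."""
    return t3412_s - t3324_s

# Example: 24 h periodic TAU (T3412), 30 s reachability window (T3324)
sleep_s = deep_sleep_seconds(24 * 3600, 30)
print(sleep_s / 3600)  # ~23.99 hours asleep per cycle
```

The longer T3412 is relative to T3324, the larger the fraction of each cycle spent at micro-Ampere current levels.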

Long-Periodic Tracking Area Updates

The 3GPP™ feature Long-Periodic Tracking Area Updates (TAU), as defined in TS24.301, is used to periodically notify the network of the availability of the IoT device. The procedure is controlled in the device's chipset via a periodic tracking area update timer (T3412), which has an extended version (the T3412 Extended Value IE) for NarrowBand IoT and LTE-M.

The benefit of the Long-Periodic TAU is that the chipset protocol stack can remain longer in Deep Sleep Mode (refer to Power Saving Mode) before it must wake up to send a TAU message.

Figure: Long-Periodic TAU Activated

Long-Periodic TAU Timer (T3412)

The value of the Long-Periodic TAU timer (T3412, or T3412ext) is sent by the network to the IoT device in the ATTACH ACCEPT message and can be sent in the TRACKING AREA UPDATE ACCEPT message. The device's chipset uses the received value within all tracking areas in the list of tracking areas it is assigned to, or until a new value is received.

Depending on the network infrastructure capabilities of the mobile network operator, its core network may or may not accept the IoT device's requested value. The IoT Solution Optimizer allows users to configure the parameter only in those cases where the network default value of T3412 can be overwritten.

  • If the timer T3412 received by the IoT device chipset in an ATTACH ACCEPT or TRACKING AREA UPDATE ACCEPT contains an indication that the network timer is deactivated or the timer value is zero, then the timer T3412 is deactivated and the IoT device does not perform periodic tracking area updates.
  • The timer T3412 is reset and started with its initial value, when the UE goes from RRC_CONNECTED to RRC_IDLE mode.
  • The timer T3412 is stopped when the UE enters RRC_CONNECTED mode.

NarrowBand IoT and LTE-M support the use of the T3412 Extended Value IE. This feature allows the T3412 timer to last for approximately 413 days. Depending on the feature support of the chipset and/or network infrastructure, T3412 may be limited to 310 hours. The IoT Solution Optimizer only allows configuration of this timer when the network supports overriding its default value with an IoT device-requested value. It also allows the configuration of the timer to the maximum value supported by the selected network(s).

Figure: Using T3412 Timer to Extend Period Between TAUs

PSM Activity Timer (T3324)

The IoT device may request the use of Power Saving Mode (PSM) simply by including a timer with the desired value in an ATTACH, TAU or Routing Area Update Message. The maximum time a device may be reachable after sending this message is 186 minutes; this is the maximum value of the PSM Activity Timer, T3324. By default many mobile network operators set T3324 values between 20 and 30 seconds. Setting the timer to a value lower than 10 seconds may be risky due to the higher latencies and turn-around times that may be experienced by devices in CE-Level 2 (Deep Indoor) coverage conditions. If the device prematurely enters PSM mode, a Downlink message may not be able to be delivered to it.

Upon expiration of the T3324 timer, the chipset or module in the IoT device powers down many of its subsystems to enter "Deep Sleep Mode," where micro-Ampere currents are consumed. The maximum time a device may stay in this hibernation is approximately 413 days (as governed by the 3GPP™ Release 13 T3412 timer). Please note that some operator core networks support shorter T3412 timer values, for example, 310 hours.

Shutting Down the Modem

Shutting down a module or chipset is a non-standardized approach that may be used to conserve the power of the IoT device. The problem with using Power Saving Mode (PSM) is that this feature only puts the module into an ultra-low-power dormant state; the rest of the hardware continues consuming power, including the sensors, actuators, microcontroller, etc. There are specific use cases where a hard cut of voltage to the entire device may be the preferred course of action.

That said, please note that some chipsets are not able to store the list of recently camped-on networks in permanent memory. As a consequence of this limitation:

  • By cutting power to those chipsets, the stored network location information is purged from volatile memory, resulting in a need to perform a full network scan.
  • Depending on the radio access technology and band support of the affected chipset or module, as well as which AT commands may be used to restrict its scanning, the result may be a long scanning period of up to 15 minutes.

IoT application designers must therefore weigh the pros and cons of switching off the IoT device, and then doing a full network scan, versus keeping the device powered with the module or chipset in deep sleep mode. Finally, if devices are regularly powered OFF and ON, the amount of signaling to the mobile network operator's network will increase, as the chipset or module must perform a full ATTACH to the network to re-register.
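The pros and cons above can be weighed with a rough charge comparison. All figures below (scan duration, currents, interval) are hypothetical illustrations and vary strongly per chipset and coverage situation:

```python
# Back-of-the-envelope comparison: hard power-off (paying for a network
# scan plus full ATTACH at wake-up) versus leaving the modem in PSM deep
# sleep. All currents and durations are hypothetical illustrations.

def charge_mas(current_ma: float, duration_s: float) -> float:
    """Charge drawn over an interval, in milliamp-seconds (mAs)."""
    return current_ma * duration_s

interval_s = 24 * 3600  # one day between transmissions

# Option A: hard power-off, then a 60 s scan + ATTACH burst at 80 mA
power_off = charge_mas(80.0, 60.0)          # 4800 mAs per wake-up

# Option B: PSM deep sleep at 5 uA (0.005 mA) for the whole interval
deep_sleep = charge_mas(0.005, interval_s)  # 432 mAs per interval

print(power_off, deep_sleep)
```

With these illustrative numbers, deep sleep wins by an order of magnitude; a chipset with a very short scan, or a much longer off interval, could tilt the balance the other way.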

Connected Mode Discontinuous Reception

Most Mobile IoT networks offer the Connected Mode Discontinuous Reception (cDRX) feature to reduce power consumption on the IoT device. Unlike Power Saving Mode, Idle Mode DRX, and Extended DRX, all of which improve power efficiency during Idle Mode (when there is no active radio connection in place between the IoT device and network), cDRX (often referred to as "RRC Mode DRX") optimizes power consumption at the device during Connected Mode, the state during and immediately after the transmission and/or reception of messages. During this period, a Radio Resource Control (RRC) logical connection is maintained with the eNodeB for the duration of a network-configured Activity Timer, allowing for quick transfer of data between the device and the eNodeB (cell site). Depending on the timer's duration, it may become mission-critical to support power saving features optimizing Connected Mode, such as the Rel.13 Early Release Assistance Indicator or cDRX. The IoT device and application cannot control this feature; support is fully dependent on whether the network offers it or not.

Without cDRX, the IoT device's chipset is forced to monitor the Physical Downlink Control Channel (PDCCH) in every subframe to check if there is downlink data available. This tends to drain the battery fast. The solution introduced in LTE standardization involves monitoring the PDCCH discontinuously; in other words, the IoT device enters sleep and wake cycles. By momentarily switching off the receive section of the radio chipset for a fraction of a second (the interval being controlled by the network-defined DRX Timer parameter, TDRX), IoT devices are able to save power. When the device wakes up, the receiver listens for the PDCCH. The IoT device cannot be contacted by the network during the period that it is not listening, but if the period of time is kept rather short, the IoT application will not experience a noticeable degradation of service. Connected Mode DRX is configured by the network in the RRC Connection Setup and RRC Connection Reconfiguration messages.

Figure: cDRX Activated (top) and Deactivated (bottom)

Note: cDRX is not drawn to scale in the figure above.

Mobile IoT

When comparing the various use cases of M2M and Internet of Things solutions requiring deployments on a wide scale, several key characteristics and needs are shared among the diverse industries: a need for low-cost devices, best-effort data transmission, and throughput in the lower bit rate ranges. Solutions fitting this pattern can leverage Machine Type Communications (MTC) wide area network connectivity technologies classified as Low Power Wide Area (LPWA). 3GPP™ standardizes these technologies under the designation "Mobile IoT."

Figure: Mobile IoT Application Characteristics and Types

Figure: Mobile IoT Technologies in the M2M / IoT Business

NarrowBand IoT (NB-IoT) and LTE-M are the two standardized Mobile IoT solutions available to the industry. Several targets were defined by the specifications body in order to ensure efficiency, robustness, and both compatibility and interoperability with existing mobile network operator deployments worldwide:

  • Minimization of signaling overhead, especially over the radio interface
  • End-to-end security for the complete system, including the mobile network operator's core network
  • Improved battery life
  • Support for delivery of both IP and Non-IP data
  • Support of SMS as a deployment option

The benefits of NB-IoT and LTE-M, as compared to Sigfox and LoRA, include:

  • Standardized technology
  • 3GPPTM-based security built-in
  • Use of licensed spectrum
  • Reuse of existing mobile network operator infrastructure and processes
3rd Generation Partnership Project (3GPP™)

The 3rd Generation Partnership Project (3GPPTM) is an alliance of seven telecommunications standardization organizations, or Organizational Partners (ARIB, ATIS, CCSA, ETSI, TSDSI, TTA, TTC), providing a platform to develop reports and specifications defining international 3GPPTM telecommunications technologies. 3GPPTM was originally created in 1998 with the signing of the "The 3rd Generation Partnership Project Agreement." Its initial mission was to develop the Technical Specifications and Technical Reports for a global 3G Mobile System based on the deployed, evolved GSM core networks and the radio access technologies that they supported (i.e., Universal Terrestrial Radio Access (UTRA), with both Frequency Division Duplex (FDD) and Time Division Duplex (TDD) modes).

During the subsequent years, 3GPP™ activities have been responsible for various generations of cellular telecommunications network technologies, spanning radio access, the core transport network, and service capabilities. The latter includes detailed specifications for codecs, security and quality of service implementation. Interworking hooks are also in scope for non-3GPP radio access, core networks, and WiFi Local Area Networks. By maintaining and ensuring backwards compatibility across the technologies, 3GPP™ safeguards the continuous evolution of the industry in a way that does not interrupt operation of user equipment. This means that even the legacy 2G Global System for Mobile communication (GSM) specifications predating 3GPP™ are managed through Technical Specifications and Technical Reports. This also includes the 2G extensions for General Packet Radio Service (GPRS) and Enhanced Data rates for GSM Evolution (EDGE). Today, billions of customers worldwide use 3GPP™ technologies for their communication needs, and with its 2G, 3G, 4G (LTE) and Mobile IoT (NarrowBand IoT, LTE-M) releases, 3GPP™ forms a solid foundation for enabling a Wide Area Network (WAN) connectivity layer in the emerging Internet of Things (IoT).

Massive IoT

Massive IoT describes use cases or applications for the Internet of Things (IoT) that deploy sensors, devices or machines at large scale, in which either communication at high frequency is not required, or low latency is not critical. Unlike Critical IoT, high performance is not required for such applications; efficient handling of masses of devices throughout their lifecycle is more important. Typical Massive IoT verticals include, among others, smart cities, smart metering and condition monitoring: low-mobility use cases relying upon infrequent, simple communication at low data rates, where the hardware battery is expected to serve up to 10 years of life. Often, good coverage in wide areas as well as deep indoor penetration is required, for example whenever the devices or sensors are located in basements or more remote areas. The emergence of Massive IoT has led to the development of LPWA technologies, such as the 3GPP™-standardized Mobile IoT technologies NarrowBand IoT (NB-IoT) and LTE-M (eMTC), which were specially designed to fulfill the requirements of such applications. Many classical use cases will still likely rely on traditional LTE (Category 1+) due to bandwidth and higher data rate requirements.

Figure: Critical IoT vs. Massive IoT

Narrowband IoT (NB-IoT) Radio Access Technology

NarrowBand IoT (NB-IoT)

Networked objects are the foundation of an intelligent world. But what will the basis of this Internet of Things (IoT) be? The right network with the right technology.

NarrowBand IoT (NB-IoT) is one of the two Low Power Wide Area (LPWA) technologies defined by 3GPP™ under the category "Mobile IoT." These technologies cater to the fact that many IoT applications only send small data payloads occasionally, at low data rates, and can tolerate latency. Simplified wireless modules providing stripped-down and focused functionality make NarrowBand IoT affordable and energy-efficient. This, in turn, enables deployment of IoT applications in many areas where networking was previously too expensive and difficult. NB-IoT coverage is currently offered via dozens of mobile operator networks across the globe.

Figure: Comparison of NB-IoT and LTE-M

Among the various benefits of this technology, three clear differentiations stand out:

Deep Indoor Penetration:

  • Higher power density: Radio transmission concentrated onto a narrower carrier bandwidth of just 180kHz, integrated into the LTE carriers of commonly deployed frequency bands, worldwide
  • Coverage Enhancement (CE) feature uses repetitions of transmission signals:
    • Outdoor Coverage (CE Level 0): +0dB (no repetitions)
    • Indoor Coverage (CE Level 1): +10dB
    • Deep Indoor Coverage (CE Level 2): +20dB
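The CE level gains listed above follow from combining repeated transmissions: repeating a signal N times yields roughly 10·log10(N) dB of gain. The sketch below illustrates the relationship; the mapping of specific repetition counts to CE levels is illustrative:

```python
# Sketch of the link between transmission repetitions and coverage gain.
# Roughly, combining N repetitions yields 10*log10(N) dB of gain; the
# repetition counts shown for each CE level are illustrative.
import math

def repetition_gain_db(repetitions: int) -> float:
    """Approximate combining gain in dB from repeating a transmission."""
    return 10.0 * math.log10(repetitions)

print(round(repetition_gain_db(10)))   # ~+10 dB, comparable to CE Level 1
print(round(repetition_gain_db(100)))  # ~+20 dB, comparable to CE Level 2
```

This also makes the latency cost of deep coverage visible: each additional 10 dB of gain requires roughly ten times as many repetitions, and therefore ten times the transmission time.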

Low Energy Consumption:

  • Optimized chipset design focusing on relevant radio features (e.g. no MIMO)
  • Reduced signaling and a more efficient data transfer directly over control channel (DoNAS)
  • Power Saving Mode (PSM) ensures a very low energy consumption (consuming only a very low current of a few μA) and ensures data sessions remain registered while devices sleep
  • Long Periodic TAU extends sleep duration between tracking messages up to 2 weeks
  • Extended Discontinuous Reception (eDRX) enables longer low-energy paging ("listening")

Lower Module Costs:

  • Only single-stream transmissions (Half-duplex)
  • No voice support
  • Unnecessary LTE features not supported (e.g. no carrier aggregation, dual connectivity or device-to-device services)
  • Inter-RAT mobility not needed (interaction with other radio access technologies, e.g. GSM, 3G, LTE)
  • Device only requires a single antenna

Suitable use cases for NarrowBand IoT include:

  • High numbers of IoT devices
  • Low data rates
  • Infrequent data transmission
  • Uncritical latency requirements
  • Deep indoor penetration
  • Low power consumption / long battery life
  • No external wake-up function needed

The protocol stack of NB-IoT is optimized for minimizing signaling overhead, especially over the radio interface. 3GPPTM has specified the following elements:

  • (Layer 1) Physical Layer, including physical channels and modulation, physical layer procedures, as well as multiplexing and channel coding
  • (Layer 2) Medium Access Control (MAC)
  • (Layer 2) Radio Link Control (RLC)
  • (Layer 2) Packet Data Convergence Protocol (PDCP)
  • (Layer 3) Radio Resource Control (RRC) Protocol
  • (Layer 3) Non-Access Stratum (NAS) Control Plane Protocol
  • User Equipment (UE) radio transmission and reception, procedures in Idle mode

These specifications have been designed with the classical LTE physical layer in mind in order to ensure compatibility between the two radio access technologies (RAT).

Figure: NarrowBand IoT Protocol Stack (Blue)

Due to their efficient design, MQTT-SN or CoAP are the recommended protocols for managing devices connected to the server / cloud over an NB-IoT network. MQTT-SN has the additional benefit that it can also be used with Non-IP transport, i.e. the IoT device's application payload can be sent directly over Control Plane NAS messages to the network, where the data is wrapped onto an IP bearer for outbound Internet traffic.
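
To illustrate why these protocols fit constrained transports, the sketch below hand-assembles a minimal non-confirmable CoAP GET request following the 4-byte fixed header of RFC 7252. The message ID and the resource path "temp" are hypothetical examples.

```python
def coap_non_get(message_id, uri_path):
    """Build a minimal non-confirmable CoAP GET request (RFC 7252).
    Only handles a single Uri-Path segment of up to 12 bytes."""
    VERSION, TYPE_NON, TKL = 1, 1, 0              # version 1, NON message, no token
    first = (VERSION << 6) | (TYPE_NON << 4) | TKL
    CODE_GET = 0x01                               # request code 0.01 = GET
    header = bytes([first, CODE_GET, message_id >> 8, message_id & 0xFF])
    path = uri_path.encode()
    # Uri-Path is option number 11; option delta and length share one byte here.
    option = bytes([(11 << 4) | len(path)]) + path
    return header + option

msg = coap_non_get(0x1234, "temp")
print(len(msg))  # the entire request fits in 9 bytes
```

Compare this with the hundreds of bytes an equivalent HTTP request would carry.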

NB-IoT Latency

The term “latency” relates to one-way, downlink or uplink communication. It does not include the time needed to establish a connection between the server and the IoT device. If the device is in Power Saving Mode, it will only receive or send data after it wakes up.

In NarrowBand IoT, applications are designed to be latency-tolerant. 3GPP™ has not specified this radio access technology to cater to delay-sensitive or real-time services.

  • According to measurements on current test setups with pre-commercial radio modules, data transmissions typically exhibit a latency of less than a second.
  • In deployments in poor coverage areas, latency may increase to 7-10 seconds. Generally, latency increases with higher CE levels.

That said, NB-IoT technology is continuously being enhanced, so current figures are expected to evolve with successive releases of the specification.

NB-IoT Mobility

Although NarrowBand IoT does not have handover functionality like 2G/3G/LTE, it still supports mobility for applications. NB-IoT is designed for infrequent and short messages between the UE and the network. It is assumed that the IoT device can exchange these messages while being served from one cell.

Figure: State Model for IoT Devices in NarrowBand IoT

The NB-IoT state model is simplified, as there are no transitions to associated LTE, UTRA and GSM states. To make NB-IoT more efficient and cost-effective for Mobile IoT communication, the LTE protocols have been reduced to a minimum and enhanced, avoiding overhead from unused LTE features. Consequently, the NB-IoT technology can be regarded as a new air interface also from the protocol stack point of view.

As illustrated by the NB-IoT protocol stack, devices having an active RRC radio bearer to the mobile network operator's radio access network eNodeB (eNB) are considered to be in Connected state (RRC_CONNECTED). When the RRC bearer is torn down, the IoT device is considered to be in Idle state (RRC_IDLE). In both states, the device remains registered on the mobile network operator's core network.

While the IoT device finds itself in RRC_CONNECTED state, handover procedures are neither needed nor supported. If a cell change is required, the IoT device first goes to RRC_IDLE and re-selects another cell, intra-frequency or inter-frequency. This process is referred to as "cell reselection." Two types of cell reselection are possible:

  • Intra-frequency cell reselection refers to the same 180 kHz carrier, but used by a different cell
  • Inter-frequency cell reselection refers to another 180 kHz carrier, even if both carriers are embedded (in-band operation) into the same LTE carrier

Figure: Two Types of Cell Reselection in NB-IoT

Based on the M2M or IoT mobility use case, it can be determined whether NB-IoT is suitable: it does not support voice or streaming. Nevertheless, most use cases rely on data transmission consisting of separate message packets, for which latency is not critical.

Here two scenarios exist:

  • Scenario 1 – New cell is in the same tracking area (TA): The IoT device scans the new cell for a free resource to transmit on and logs into the new cell instantly (cell reselection). Compared to Scenario 2, this process consumes relatively little energy and time.
  • Scenario 2 – New cell is in a new tracking area: The IoT device recognizes the new cell as being in a new tracking area and therefore carries out a tracking area update. This also involves downloading a new PDP context, which takes more time and consumes more energy than Scenario 1.
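
The decision between the two scenarios can be sketched as a simple check on the tracking area code; the returned labels are illustrative, not protocol messages.

```python
def reselect(current_tac, new_tac):
    """Sketch of the cell-change decision: a tracking-area change
    additionally triggers a Tracking Area Update (Scenario 2)."""
    if new_tac == current_tac:
        return "cell reselection only"                    # Scenario 1: fast, low energy
    return "cell reselection + tracking area update"      # Scenario 2: slower, more energy

print(reselect(0x01A4, 0x01A4))
print(reselect(0x01A4, 0x01B0))
```
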
NB-IoT Data Transmission

In NarrowBand IoT, both up- and downlink transmissions are possible in sequence. In the 3GPP™ Release 13 specification, Frequency Division Duplex (FDD) Half-duplex Type-B has been adopted:

  • Uplink and downlink are separated in the frequency domain and the IoT device (User Equipment, UE) either receives or transmits, but does not perform this simultaneously
  • Between every switch from uplink to downlink, or vice-versa, there is at least one guard subframe (SF) in-between, so that the IoT device has time to switch between its transmitter and receiver chains.

Figure: FDD Frequencies for Transmission and Reception

NB-IoT Data Rates

Given the carrier and channel structure of NarrowBand IoT, it is possible to determine the maximum theoretical bit rates and throughputs. Note that the industry's intent is not to offer ever-increasing throughput; for that, LTE-M (eMTC) and LTE Cat.1 are the better-suited options.

Uplink (UL):

  • Peak rate of up to 230 kbps (average 63 kbps)
  • Data transfers are possible at any time

Downlink (DL):

  • Peak rate up to 250 kbps (average 21 kbps)
  • Data transfers are only possible when device is not in Power Saving Mode (sleep mode)

Peak data rates are achieved using multitone transmission. NB-IoT networks that only support single tone transmission offer peak data rates of ca. 20 kbps.
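
Using the average rates quoted above, the ideal transfer time for a payload follows from simple arithmetic. The sketch ignores protocol overhead and CE repetitions, and the 512-byte payload is a hypothetical example.

```python
def airtime_s(payload_bytes, rate_kbps):
    """Ideal transfer time in seconds, ignoring overhead and repetitions."""
    return payload_bytes * 8 / (rate_kbps * 1000)

# Hypothetical 512-byte sensor report at the average NB-IoT rates above.
print(round(airtime_s(512, 63), 3))  # uplink at 63 kbps
print(round(airtime_s(512, 21), 3))  # downlink at 21 kbps
```
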

NB-IoT Data over NAS (DoNAS)

The NarrowBand IoT protocol offers a lightweight, convenient way to transfer small payloads of IoT application data directly over the Control Plane, or Non-Access Stratum (NAS) layer. Referred to as Data over NAS (DoNAS), this leverages the logical bearer set up between the IoT device and the MME (Core Network). The NAS layer, in turn, sits on a Radio Resource Control (RRC) bearer between the device and the eNB, which can be set up and removed periodically. After the device or the eNB tears down the RRC bearer, the latter can be set up again via a paging procedure (on the downlink) or Random Access procedure (on the uplink).

NB-IoT Frequency Bands

The same frequency bands as in LTE are used for NarrowBand IoT, with a subset defined in 3GPP™ Release 13.

Most frequencies are in the lower range of the existing LTE bands, reflecting that poor coverage conditions are a concern for MTC. In Europe, most operators deploy NB-IoT on Bands 3, 8, and 20. The technology is deployed using FDD Half-duplex, in a carrier structure compatible with LTE.

Figure: NB-IoT Frequency Bands

NB-IoT Carrier Structure

An important feature of NarrowBand IoT is that it shares the same numerology as LTE. This allows spectrum to be shared between the two systems without causing mutual interference. NB-IoT carriers are specified to be 180 kHz wide, allowing them to be inserted into LTE carriers, in the so-called "In-Band" deployment, in lieu of an LTE Physical Resource Block (PRB), which is simply decommissioned. This is fully aligned with the LTE channel structure of 12 tones of 15 kHz per physical resource block (PRB); thus an NB-IoT carrier and an LTE PRB are equal in bandwidth structure. The orthogonality between the NB-IoT PRB and all the other LTE PRBs can thus be preserved. Alternatively, the 180 kHz carrier may be inserted in the spectrum adjacent to the first or last PRB, in "Guardband" deployment. Naturally, the option exists to deploy NB-IoT independently from LTE spectrum, e.g. in GSM-dedicated spectrum; this is referred to as "Stand-Alone" mode.

Figure: NB-IoT Operation Modes

Figure: NB-IoT In-Band Deployment of Four Carriers (PRB)

The 180 kHz bandwidth of a single NB-IoT carrier in the downlink (DL), used for communication originated by the eNodeB to the IoT device, is comprised of 12 sub-carriers (tones) of 15 kHz bandwidth, each. Orthogonal Frequency Division Multiplex (OFDM) with legacy QPSK and tail-biting convolutional coding are used for modulation and forward error correction.

On the uplink (UL), for communication originated by the IoT device to the eNodeB, NB-IoT requires IoT device support for single-tone transmission, with 48 Single Tones of 3.75 kHz spacing within the 180 kHz of UL spectrum. These can use pi/4 QPSK or pi/2 BPSK, with turbo coding. Optionally, some mobile network operators may support 3, 6 or 12 Multi Tones of 15 kHz bandwidth, with QPSK modulation.
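
The tone arithmetic above can be verified directly: 12 downlink subcarriers of 15 kHz, or 48 uplink single tones at 3.75 kHz spacing, both fill exactly one 180 kHz NB-IoT carrier (i.e. one LTE PRB).

```python
PRB_KHZ = 180.0  # one NB-IoT carrier = one LTE Physical Resource Block

dl_tones = PRB_KHZ / 15.0         # downlink: 15 kHz subcarrier spacing
ul_single_tones = PRB_KHZ / 3.75  # uplink single-tone: 3.75 kHz spacing

print(int(dl_tones), int(ul_single_tones))
```
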

NB-IoT Coverage Enhancement (CE)

Providing "Deep Indoor" coverage is an important aspect of NB-IoT, which is essential for IoT applications requiring devices to be positioned in areas not readily accessible by 2G, 3G, or LTE coverage, such as in the basements of buildings, parking garages, etc. This is achieved by repeating Layer 3 (RRC, NAS) messages a predefined number of times, thereby increasing the probability that receivers correctly receive and demodulate the message.

To optimally cope with different radio conditions, 3GPP™ Release 13 has defined three Coverage Enhancement (CE) Levels: CE-Levels 0, 1 and 2. The number of repetitions in each CE-Level is predefined by the network. The CE feature essentially increases the maximum coupling loss from 144 dB to 164 dB:

  • CE-Level 0 (CE0): 0 dB gain vs. GSM signal (used when coverage is good)
  • CE-Level 1 (CE1): Typically 10 dB gain vs. GSM signal
  • CE-Level 2 (CE2): Typically 20 dB gain vs. GSM signal (used if coverage poor)

The different CE-Levels determine the number of times downlink and uplink messages are repeated to reach devices in poor coverage. Data transmissions and signaling can be repeated either 1, 2, 4, 8, 16, 32, 64, or 128 times, depending on the network's configuration. A higher power density (of 23 dBm) is also used in CE-Level 1 and CE-Level 2 instead of power control. The use of a higher transmit power and retransmissions of messages naturally comes at a cost to the IoT device, namely, a significant reduction in expected battery life.

Figure: Power Consumption Comparison, CE0 vs. CE2

In order to govern the usage of CE-Levels, the mobile operator network may broadcast the boundary conditions for each CE-Level in the System Information Block (SIB). For example, CE-Level 0 may be used when the received signal strength at the IoT device's radio module is better than -114 dBm, CE-Level 1 whenever the signal strength lies between -114 dBm and -124 dBm, and CE-Level 2 when it is worse than -124 dBm. Through direct RF measurements, the device compares its received signal strength against these boundary conditions and selects the corresponding CE-Level. Please note that some chipset suppliers may implement proprietary extensions in their protocol stack, modifying the CE-Level selection criteria.
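
The boundary conditions in the example above reduce to a threshold comparison, which can be sketched as follows. The thresholds are the example SIB values from the text; real networks configure their own, and chipsets may apply proprietary variations.

```python
def select_ce_level(rsrp_dbm, upper=-114.0, lower=-124.0):
    """Map a measured RSRP (dBm) to a Coverage Enhancement Level,
    using the example broadcast thresholds from the text."""
    if rsrp_dbm > upper:
        return 0   # good/outdoor coverage: no repetitions
    if rsrp_dbm >= lower:
        return 1   # indoor coverage: ~+10 dB via repetitions
    return 2       # deep indoor coverage: ~+20 dB via repetitions

print(select_ce_level(-110), select_ce_level(-120), select_ce_level(-130))
```
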

Figure: Coverage Enhancement in a NB-IoT Network

NB-IoT Outdoor Coverage

Coverage Enhancement (CE)-Level 0 corresponds to the normal mode of operation, without repetitions and with no coverage gain vs. GSM. This CE-Level is typically used by devices which are outdoors, or in good coverage areas. IoT devices can also take advantage of power control in this CE-Level to dynamically adapt their average transmit power, based upon received RSRP measurements. The result is a significant drop in battery consumption.

It is expected that devices used in IoT applications such as fleet management, e-bikes, smart street lighting, smart bins, etc., are likely most of the time in CE-Level 0 conditions due to the good coverage outdoors. CE-Level 0 corresponds to received RSRP values better than -114 dBm.

Please note: For LTE-M, CE-Level 0 corresponds to CE-Mode A.

Figure: Coverage Extension Levels in Buildings

NB-IoT Indoor Coverage

Coverage Enhancement (CE)-Level 1 typically gives a 10 dB gain by repeating messages a small number of times. This CE-Level is used by devices which are indoors, where coverage is poorer. IoT applications deployed indoors, such as smoke detectors and other smart home applications, would likely be in CE-Level 1 due to the relatively poor coverage found in these environments. CE-Level 1 corresponds to received RSRP values between -114 dBm and -124 dBm.

Please note: For LTE-M, CE-Level 1 corresponds to CE-Mode A.

Figure: Coverage Extension Levels in Buildings

NB-IoT Deep Indoor Coverage

Coverage Enhancement (CE)-Level 2 typically gives a 20 dB gain by repeating the message many times (much more often than in CE-Level 1). This Coverage Enhancement Level is used by devices in deep indoor areas like basements, underground garages, etc., or at the edge of network coverage outdoors. IoT devices used in parking applications, meters buried in the ground, as well as energy or water meters in the basements of buildings, are but a few examples where CE-Level 2 would probably be used. Received RSRP values worse than -124 dBm are considered to be Deep Indoor.

Please note: For LTE-M, CE-Level 2 corresponds to CE-Mode B.

Figure: Coverage Extension Levels in Buildings

NB-IoT Category NB1

LTE Category NB1 refers to a Long-Term Evolution (LTE)-technology protocol class defined in 3GPP™ Release 13. This category was designed specifically to address the needs of Massive IoT use cases by:

  • Stripping down unnecessary LTE features
  • Optimizing power consumption
  • Reducing complexity of IoT devices (e.g. only one antenna is needed for NB-IoT, as opposed to two antennas for higher-category LTE)

IoT devices using LTE Cat-NB1 communicate using half-duplex mode and offer data rates of up to 250 kbps in the Downlink and up to 230 kbps in the Uplink. They can therefore address numerous and diverse Mobile IoT use cases requiring lower throughput, tolerating higher latencies, and involving less communication than LTE-M (eMTC). Devices enabled with NB-IoT also have up to 20 dB coverage extension (CE) versus GSM using so-called "CE Level 2". A narrow spectral bandwidth of 180 kHz is used to communicate over standard LTE spectrum bands. IoT devices supporting Rel.13 NB-IoT can transmit using Power Class 3 (23 dBm) or Power Class 5 (20 dBm) maximum power.

Most NB-IoT mobile networks worldwide support the NB1 standard. IoT devices with either NB1- or NB2-capable wireless communication chipsets/modules can operate on these networks.

NB-IoT Category NB2

Category NB2 refers to a Long-Term Evolution (LTE)-technology protocol class defined in 3GPP™ Release 14. Among the key enhancements brought to IoT devices and networks supporting LTE Category NB2 are the following features:

Data Rate Improvements

  • Max Transport Block Size (Uplink/Downlink) of 2536 bits, up from 1000/680 bits (UL/DL)
  • Optional support of 2 HARQ processes with Transport Block Sizes of 1352/1800 bits (UL/DL)
  • Peak data rates on standalone non-anchor carriers increase to 105/80 kbps (UL/DL) for 1 HARQ process, up from 60/25 kbps (UL/DL)
  • Lower latency because of higher data rates

Introduction of Non-anchor NB-IoT Carrier

  • Both the anchor and up to 15 non-anchor carriers can be used for paging and PRACH procedures
  • Network can support more devices per square kilometer

Device Positioning

  • OTDOA (Observed Time Difference of Arrival) support based on the NB-IoT Positioning Reference Signal (NPRS) with position accuracy of up to 50 meters
  • E-CID support
  • Measurements are done only in Idle mode
  • Device measures the arrival time of different NPRS in idle mode from base stations; a related application in the network then calculates the time difference and position

New Power Class

  • Support for smaller batteries (coin sizes) with new Power Class 6 (14 dBm)
  • Relaxed MCL of 155 dB (e.g. wearables, where higher coverage is not necessary)
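
For reference, the power classes mentioned in this document convert to linear transmit power via the standard relation P(mW) = 10^(dBm/10):

```python
def dbm_to_mw(dbm):
    """Convert transmit power from dBm to milliwatts."""
    return 10 ** (dbm / 10)

# Power Class 3 (23 dBm), Power Class 5 (20 dBm), new Power Class 6 (14 dBm)
for dbm in (23, 20, 14):
    print(dbm, "dBm ->", round(dbm_to_mw(dbm)), "mW")
```

The lower peak power of Power Class 6 is what enables the coin-cell designs mentioned above.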

Group Messaging & Updates (Multicast)

  • Adaptation of the Rel.13 Single-Cell Point-to-Multipoint (SC-PTM) feature to NB-IoT
  • Max TBS value for NPDSCH of 2536 bits in Idle mode
  • Relevant where one message needs to be transmitted to multiple devices in a cell, as well as for firmware/software updates

Connected Mode Mobility

  • Mobility is realized by RRC connection re-establishment triggered by radio link failure
  • Rel.14 Release Assistance Indication (RAI) feature improves power consumption

Please note that most NB-IoT mobile networks currently do not support Category NB2 features. In roaming scenarios, local implementations should be treated as "feature islands" in the global NB-IoT coverage footprint. All IoT devices with chipsets/modules using this release of the NB-IoT standard are backwards-compatible with Rel.13 NB-IoT networks.

NB-IoT Global Coverage

Mobile Network Operators are deploying NarrowBand IoT (NB-IoT) networks globally across most continents. Please refer to the latest global coverage map from the GSMA to learn which regions are covered. The IoT Solution Optimizer will gradually include operators from many of these regions. For in-country coverage maps, please contact your local operator for more information.
LTE-M (eMTC) Radio Access Technology

LTE-M (eMTC)

Networked objects are the foundation of an intelligent world. But what will the basis of this Internet of Things (IoT) be? The right network with the right technology.

LTE-M (eMTC) is one of the two Low Power Wide Area (LPWA) technologies defined by 3GPP™. LTE-M coverage is offered by numerous Mobile Network Operators around the globe. These technologies cater to the fact that many IoT applications only send small data payloads occasionally, at relatively low data rates, and with some tolerance for latency. LTE-M is an integral part of the LTE specification. Its name refers to its placement as a proper LTE category (Category M), specifically designed to address the needs of IoT by stripping down unnecessary LTE features, optimizing power consumption and reducing the complexity of the UE (i.e. just one antenna for LTE-M, as opposed to two antennas for higher LTE categories). Its focused functionality makes LTE-M both affordable and energy-efficient. Its standardized character means that it inherits the security features offered by LTE in its other categories, and it can be deployed in licensed spectrum, ensuring Quality of Service. These facts, in combination, enable deployment of IoT applications in many areas where networking was previously too expensive and difficult.

Common IoT applications which may leverage LTE-M include:

  • Asset Tracking: Real-time location tracking of children, pets, and elderly people
  • Health: Monitoring the health status information of patients
  • Wearables: Connection of smart watches, hearables, fitness trackers, and glasses
  • Security: Voice calls e.g. in alarm panels, elevators, cars, etc.
  • Device control: Industrial handhelds, vending machines
  • White Goods: Consumption monitoring (B2B) and smart control (B2C) of white goods

Among the various benefits of this technology, three clear differentiations stand out:

Low Energy Consumption

  • Optimized chipset design focusing on relevant radio features (e.g. no MIMO)
  • Reduced signaling and a more efficient data transfer directly over control channel (DoNAS)
  • Power Saving Mode (PSM) ensures a very low energy consumption (consuming only a very low current of a few μA) and ensures data sessions remain registered while devices sleep
  • Long Periodic TAU extends sleep duration between tracking messages up to 2 weeks
  • Extended Discontinuous Reception (eDRX) enables longer low-energy paging ("listening")

Deep Indoor Coverage

  • Higher power density: Radio transmission concentrated onto a narrow carrier bandwidth of just 1.4 MHz, integrated into the LTE carriers of commonly deployed frequency bands, worldwide
  • Coverage Enhancement (CE) Mode A and Mode B use repetitions of transmission signals

Lower Module Costs

  • Only single-stream transmissions (Half-duplex)
  • Voice support (VoLTE-M)
  • Unnecessary LTE features not supported (e.g. no carrier aggregation, dual connectivity or device-to-device services)
  • Intra-RAT not needed (interaction with other radio access technologies, e.g. GSM, 3G, LTE)
  • Device only requires a single antenna

Figure: Comparison of LTE-M with NarrowBand IoT

How does LTE-M compare to other technologies?

LTE-M offers higher bandwidth, higher data rates and decreased latency, as compared to NarrowBand IoT. Likewise, LTE-M offers a longer battery lifetime and better indoor coverage versus LTE Category 1. For mission-critical applications, LTE-M is the LPWA technology of choice. It supports devices that need to communicate in real time to ensure the application meets user-experience requirements. Some examples of real-time communication include voice, emergency data and precision tracking data. The latency of LTE-M (< 50 msec without Coverage Enhancement, < 1 sec with Coverage Enhancement) is much better when compared to NB-IoT (< 10 sec). LTE-M additionally supports cell handover for true mobility during prolonged data transfers. Both IP and Non-IP Data Delivery are supported.

The battery lifetime is better for LTE-M as compared to the higher LTE categories, yet it is expected to be a bit lower than for NB-IoT. Generally, it is a function of the power saving features that applications activate (Power Saving Mode, Enhanced DRX, Long Periodic TAU), the amount of data which needs to be transferred, and the number of messages that are sent.

Likewise, Deep Indoor coverage penetration is enabled by robust modulation techniques and repetitions of transmission. Depending on the coverage situation of the device, up to four different Coverage Extension levels (CE0, CE1, CE2, CE3) may be used:

  • CE0 & CE1 are referred to as "Mode A"
  • CE2 & CE3 are referred to as "Mode B"

Generally speaking, whenever a customer needs higher data rates, higher data volumes, and lower latency, as compared to NarrowBand IoT, LTE-M is the technology of choice for their LPWA service. Mobile use cases are also better covered due to cell handover functionality within the LTE-M framework.

LTE-M Category M1

LTE Category M1 refers to a Long-Term Evolution (LTE)-technology protocol class defined in 3GPP™ Release 13. This category was designed specifically to address the needs of IoT by:

  • Stripping down unnecessary LTE features
  • Optimizing power consumption
  • Reducing complexity of IoT devices (e.g. only one antenna is needed for LTE-M, as opposed to two antennas for higher-category LTE).

IoT devices using LTE Cat-M1 communicate using half-duplex mode and offer data rates of up to 375 kbps in the Downlink and up to 300 kbps in the Uplink. They can therefore address numerous and diverse Mobile IoT use cases requiring higher throughput and more frequent communication than NarrowBand IoT (NB-IoT). Devices enabled with LTE-M also have up to 15 dB coverage extension (CE) using so-called "CE Mode B" and 5 dB in "CE Mode A." A narrow spectral bandwidth of 1.4 MHz is used to communicate over standard LTE spectrum bands.

All operator networks worldwide offering LTE-M currently support Rel.13 LTE-M. Accordingly, there are no wireless communication chipsets/modules on the market supporting Rel.14 (Category M2).

LTE-M Latency

Compared to other Mobile IoT technologies such as NB-IoT, LTE-M provides much better latency: as low as 50 ms under favorable coverage conditions, and less than 1 second in Indoor coverage. For latency-sensitive IoT applications, LTE-M would therefore be a technology of choice, providing latency performance in good coverage that is comparable to LTE. LTE-M can support devices that need to communicate in near-real-time to ensure meeting user-experience requirements, such as for voice communication, emergency data, or precision tracking data.

LTE-M Mobility

LTE-M is optimized for data mobility use cases, where handovers between serving cell towers are required during prolonged data transfers (much like high speed LTE). This is a key differentiator as compared to NarrowBand IoT. Two main mobility modes are supported:

  • Idle Mode Mobility: The IoT device's chipset is in control and decides when to perform a cell reselection.
  • Connected Mode Mobility: The Mobile Network Operator's network controls the IoT device's mobility, and decides when it shall be moved to another serving cell, thereby triggering the handover procedure.

In this context, 3GPP™ Release 14 supports both intra-frequency and inter-frequency handovers. Intra-frequency refers to switching serving cells while remaining on the same frequency, whereas inter-frequency requires a switch of frequencies, either within the same cell, or to another.

LTE-M Data Transmission

LTE-M technology is currently deployed using Frequency Division Duplex (FDD) Half-duplex, in a carrier structure compatible with co-located LTE deployments. By using Half-duplex mode, LTE-M devices can offer data rates of up to 375 kbps in the Downlink and up to 300 kbps in the Uplink.

In Half-duplex, Uplink and Downlink traffic are separated in frequency and the IoT device either receives or transmits at any specific time, however, not both simultaneously. Between every switch from Uplink to Downlink, or vice versa, there is at least one guard subframe (SF) in between, so that the IoT device's chipset has time to switch between its transmitter and receiver chains.

Figure: Frequency Division Duplex - Half-duplex

LTE-M Frequency Bands

LTE-M uses the same frequency bands as defined for standard LTE. This ensures that licensed spectrum is used and that maximum spectral efficiency in the network's handling of multiple radio access technologies is achieved.

  • The 3GPP™ Release 13 specification provides the list of the supported LTE spectral bands: 1, 2, 3, 4, 5, 7, 8, 11, 12, 13, 18, 19, 20, 26, 27, 28, 31, 39, 41;
  • 3GPP™ Release 14 added the LTE spectral bands: 25 and 40.

In Europe, most operators deploy LTE-M on Band 3, 8, and/or 20. Generally speaking, the following bands must be covered in global modules in order to cover North America, Latin America, Europe and parts of Asia:

  • Bands 1, 2, 3, 4, 5, 12, 13, 20, 25, 26, 28.
LTE-M Coverage Enhancement (CE)

Indoor coverage penetration is enabled with LTE-M by using robust modulation techniques and redundancy (repetitions of transmissions). Depending on the coverage situation that the device finds itself in, up to four different Coverage Extension (CE) levels can be used (CE Level 0, CE Level 1, CE Level 2, CE Level 3). The higher the CE Level, the following applies:

  • More repetitions of the transmitting signal are required
  • Higher energy consumption is required of the device
  • A lower throughput is provided
  • A higher latency is observed

CE Level 0 and CE Level 1 are grouped under the term Coverage Enhancement "Mode A," with up to 32 repetitions in CE Level 1. CE Level 2 and CE Level 3 are instead referred to as CE "Mode B," with up to 2048 repetitions in CE Level 3. CE Mode A provides a coverage enhancement of 5 dB as compared to standard LTE (e.g. using LTE Category 1). CE Mode B is being phased in gradually across LTE-M networks, providing a coverage enhancement of up to 15 dB, as compared to standard LTE.
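
The CE Level grouping and repetition maxima given above can be captured in a small lookup table. Only the CE Level 1 and CE Level 3 maxima are stated in the text; the others are deliberately left as None rather than guessed.

```python
# CE Level -> (CE Mode, max repetitions), per the grouping described above.
CE_LEVELS = {
    0: ("Mode A", None),   # maximum not stated in the text
    1: ("Mode A", 32),
    2: ("Mode B", None),   # maximum not stated in the text
    3: ("Mode B", 2048),
}

def ce_mode(level):
    return CE_LEVELS[level][0]

print(ce_mode(1), ce_mode(3))
```
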

LTE-M CE Mode A

A Coverage Enhancement (CE) Mode refers to specific Coverage Enhancement Levels, defined by the number of message repetitions being supported. CE "Mode A" refers to CE Level 0 and CE Level 1, with up to 32 repetitions in CE Level 1. It is the default mode of operation for LTE-M devices and LTE-M networks, providing efficient operation in coverage scenarios where moderate coverage enhancement is needed. CE Mode A is designed to maintain the LTE-M advantages of higher data rates, voice call possibility, and connected mode mobility. It is supported from the launch of LTE-M at many operators globally, providing a coverage enhancement of 5 dB compared to standard LTE (e.g. using LTE Category 1).

LTE-M CE Mode B

A Coverage Enhancement (CE) Mode refers to specific Coverage Enhancement Levels, as defined by the number of message repetitions being supported. CE "Mode B" refers to CE Level 2 and CE Level 3, with up to 2048 repetitions in CE Level 3. CE Mode B will not be supported initially by many LTE-M operators, but will be added at a later stage of the network deployments, providing a coverage enhancement of up to 15 dB compared to standard LTE.

LTE-M Voice over LTE (VoLTE)

As LTE-M can provide similar Quality of Service to LTE under standard coverage conditions (with respect to latency), Voice over LTE (VoLTE) can in principle be supported over an LTE-M bearer. This voice capability may indeed be useful, for example, with smart watches or for security solutions where emergency calls in elevators or cars must be triggered. From a standardization perspective, voice support capability may be provided in Coverage Enhancement Mode A (i.e. CE Levels 0 and 1) only. A higher latency must be expected when LTE-M devices are under bad coverage conditions (CE Level 1), which may have a significant impact on voice quality. For this reason, Voice over LTE will initially not be supported on numerous LTE-M networks during their early deployment.

Many Mobile Network Operators (MNO) plan to perform performance and quality assessments shortly after launch to ensure quality assurance and issue recommendations in terms of technical prerequisites (e.g. CE Mode configuration and interoperability on LTE-M radio communication modules).

LTE-M Global Coverage

Mobile Network Operators are deploying LTE-M (eMTC) networks globally across most continents. Please refer to the latest global deployment map from the GSMA to learn which regions are covered. The IoT Solution Optimizer will gradually include operators from many of these regions. For in-country coverage maps, please contact your local operator for more information.

LTE-M Carrier Structure

LTE-M is based on the LTE 3GPP™ standard, which allows it to re-use licensed, deployed LTE frequency bands. Compared to higher-category LTE carriers, LTE-M only requires 1.4 MHz of spectrum (including guard bands). It thereby consumes a maximum of six Physical Resource Blocks (PRBs), or 1.08 MHz.

A powerful aspect of LTE-M technology is that its traffic can be multiplexed within a normal LTE wideband carrier. This means that a Mobile Network Operator deploying both technologies can share the available cell capacity between standard LTE and LTE-M communication.

LTE-M Category M2

LTE Category M2 refers to a Long-Term Evolution (LTE)-technology protocol class defined in 3GPP™ Release 14. The Cat-M2 protocol additionally supports full-duplex FDD, thereby delivering higher data rates as compared to Cat-M1, with up to around 2.5 Mbps in the Downlink and Uplink (as compared to 375 kbps in the Downlink and up to 300 kbps in the Uplink for Cat-M1). This ensures faster data transfers and lower battery consumption in good network coverage. Devices enabled with LTE Cat-M2 also support higher peak rates using 2 HARQ processes for Uplink and Downlink, as opposed to LTE Cat-M1's single HARQ process.

Currently, there are no operator networks or wireless communication chipsets/modules supporting the Cat-M2 specification.

Mobile IoT Features

Network-based Localization

Enhanced Cell-ID (E-CID)

To improve the accuracy of Cell-ID positioning, certain network attribute measurements made by the IoT device and/or the serving mobile network base station (eNodeB) can be utilized in addition to the geographical knowledge of a device's serving cell. This technique is called Enhanced Cell ID (ECID, for short). In ECID, the Round Trip Time (RTT) between the eNodeB and the device is used to estimate the distance between the two, giving a much better estimate of the device's location than simply taking the whole cell radius. The RTT is determined by analyzing Timing Advance (TA) measurements, either from the eNodeB or by directly querying the device. With this technique, ECID is able to provide better accuracy than CID, typically around 150 meters, or coarser.
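
The distance estimate behind ECID follows from the Timing Advance granularity: one LTE TA unit corresponds to 16 Ts of round-trip time (Ts = 1/30.72 MHz), i.e. roughly 78 m of one-way distance. A sketch:

```python
C = 299_792_458.0        # speed of light, m/s
TS = 1 / 30_720_000.0    # LTE basic time unit Ts, in seconds

def ecid_distance_m(ta_units):
    """Estimate eNodeB-to-device distance from the LTE Timing Advance.
    One TA unit = 16 * Ts of round-trip time, so the one-way distance
    is c * 16 * Ts / 2 per unit (about 78 m)."""
    return ta_units * C * 16 * TS / 2

print(round(ecid_distance_m(2)))  # two TA units already imply ~156 m
```

The coarse ~78 m step of the TA is one reason ECID accuracy is quoted at "around 150 meters, or coarser."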

Figure: Enhanced Cell-ID
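The relationship between a Timing Advance measurement and the estimated eNodeB-to-device distance can be sketched numerically. The calculation below uses the standard LTE timing constants; treat it as an illustration of the principle rather than a positioning implementation:

```python
# Rough distance estimate from an LTE Timing Advance (TA) value, as used in
# Enhanced Cell-ID positioning. Each TA step corresponds to 16*Ts, where
# Ts = 1/(15000*2048) s is the LTE basic time unit; the signal travels the
# eNodeB-device path twice (round trip), hence the division by 2.
C = 299_792_458          # speed of light, m/s
TS = 1 / (15000 * 2048)  # LTE basic time unit, seconds

def ta_to_distance(ta: int) -> float:
    """Estimate the eNodeB-to-device distance in meters from a TA value."""
    round_trip_time = ta * 16 * TS
    return round_trip_time * C / 2

# One TA step corresponds to roughly 78 m of distance resolution:
print(round(ta_to_distance(1), 1))   # ~78.1
print(round(ta_to_distance(100)))    # ~7807
```

This ~78 m granularity of the TA measurement is one reason why ECID accuracy is typically quoted as "around 150 meters, or coarser."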

Observed Time Difference of Arrival (OTDOA)

The principle behind Observed Time Difference of Arrival (OTDOA) positioning is similar to that of GPS: it is a multilateration positioning method. OTDOA is based on the IoT device's wireless communication chipset or module measuring the Time Difference of Arrival (TDOA) between the so-called Positioning Reference Signals (PRS) of a neighbour cell and those of the serving cell. The PRS signals are transmitted from a set of time-synchronized base stations surrounding the device. Each measurement allows a mobile network positioning server (the E-SMLC) to constrain the position of the device to a hyperbola defined by the base stations transmitting the downlink signals. If measurements between three or more base stations are reported, the positioning server is able to determine multiple hyperbolas and fix the position of the device to their intersection. In order to calculate the UE’s location, the network needs to know accurately the locations of the eNodeB transmit antennas and the transmission timing of each cell. Although it is very similar to GPS, OTDOA positioning does not suffer from the long time-to-first-fix (TTFF) of GPS, and delivers accuracies in the range of 50-200 meters in typical cases.

Figure: Observed Time Difference of Arrival
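The multilateration principle can be illustrated with a toy simulation: ideal TDOA measurements are generated for a known device position and then recovered by searching for the point whose predicted hyperbolas best agree. A real E-SMLC uses far more sophisticated solvers; this Python sketch only demonstrates the geometry:

```python
# Toy OTDOA illustration: the device measures time differences of arrival of
# Positioning Reference Signals from time-synchronized eNodeBs; each TDOA
# constrains it to a hyperbola, and the intersection fixes the position.
# Here we simulate ideal measurements and recover the position by grid search.
import math

C = 299_792_458.0  # propagation speed, m/s

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def tdoas(device, enbs):
    """TDOA of each neighbour cell relative to the serving cell enbs[0]."""
    ref = dist(device, enbs[0]) / C
    return [dist(device, e) / C - ref for e in enbs[1:]]

def locate(measured, enbs, span=10_000, step=50):
    """Grid-search the point whose predicted TDOAs best match the measured ones."""
    best, best_err = None, float("inf")
    for x in range(0, span, step):
        for y in range(0, span, step):
            err = sum((m - p) ** 2 for m, p in zip(measured, tdoas((x, y), enbs)))
            if err < best_err:
                best, best_err = (x, y), err
    return best

enbs = [(0, 0), (8000, 0), (0, 8000)]   # serving cell + two neighbours
true_pos = (3000, 4500)
print(locate(tdoas(true_pos, enbs), enbs))  # -> (3000, 4500)
```

With fewer than two neighbour-cell measurements, the hyperbolas do not intersect in a single point, which is why reports from three or more base stations are needed for a fix.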

Network-based Localization

The ability to both locate an object and communicate with it enables a wide range of location-based services. The most straightforward method by which a device can know its location is to use a GPS or GLONASS receiver operating completely independently of the cellular network; however, these widely-used Global Navigation Satellite System (GNSS) positioning methods are not suitable for Massive IoT due to the power consumption and cost of GNSS chips. Furthermore, IoT devices may be in locations without satellite coverage, such as indoor environments, encapsulated installations, underground sites, or dense urban areas, and are thus not serviceable with GNSS solutions. As such, 3GPPTM invested significant effort during its Release 14 to enhance positioning support for multiple IoT technologies. What all location-based services have in common is finding where the mobile device is actually located. The Release 14 specification provides support for the following location technologies:

  • Cell-Identification (CID)
  • Enhanced Cell-Identification (E-CID)
  • Observed Time Difference of Arrival (OTDOA)
Cell Identification

Positioning based on Cell-Identification (Cell-ID or CID) is a network-based technique which can be used to estimate the position of an IoT device quickly, but with very low accuracy. It uses the geographical knowledge of a device’s serving cell to estimate its position. In the simplest case, the position is estimated using only the coordinates of its serving network eNodeB or cell, with an uncertainty area the size of the cell coverage. That said, the radius of a cell can be very large (up to 35 km), hence the resulting very low accuracy.

Figure: Cell-ID

Roaming

Roaming refers to a service provided by a mobile network operator that allows its customers’ 3GPPTM-capable IoT devices to connect to an alternative cellular network beyond the range of the "home" network. In that way, the devices are able to send and receive data, as well as access other services, through the "visited" network as if they were using their home operator’s network. This helps ensure seamless coverage and service continuity for customers, especially in applications involving high mobility or global deployment. Roaming among different operators is only possible when the corresponding roaming agreements, which settle the legal aspects of mutual network usage, have been signed between the parties. If there is no roaming agreement in place, the customer SIM will be rejected by the visited network. The guidelines on roaming agreements are defined by the GSMA for its members and may differ depending on radio access technology (RAT) type.

Different types of roaming can be differentiated, among others:

  • National roaming – refers to the ability to use the network of another operator within the same country. It is mainly used by operators to gain or extend coverage for a particular geographical area without the need to deploy their own infrastructure.
  • International roaming – refers to the ability to use the network of a foreign operator outside the home country. It is mainly used to ensure coverage and service consistency internationally without having to expand the network into other markets.

Roaming is particularly important to manufacturers deploying Mobile IoT devices on a global basis and looking to benefit from economies of scale. It is also critical for use cases such as logistics tracking, which may involve containers crossing numerous international borders on a single trip, as well as for devices that may be manufactured in one country but deployed in another, such as smart meters.

A lack of commercial agreements between mobile operators is not the only factor that may limit the realization of some use cases. Use cases can also be restricted by technological disparities in network implementation, such as a country's use of different frequencies, network configuration, implementation of certain features (e.g., PSM or eDRX), or support of particular services like voice or SMS. Such aspects must be considered when designing an IoT application and choosing suitable hardware for the use case.

RAT-specific Configurations

With the advent of multimode modules supporting both NarrowBand IoT and LTE Category-M, it has become possible to build Mobile IoT applications which can roam across a global footprint, switching between either technology based on local coverage availability or the necessity to use the higher-bandwidth connection of LTE-M to handle larger payloads, e.g., for firmware updates. With this paradigm shift to multimode operation, it becomes necessary to plan IoT device applications in a way that accounts for the differences between NB-IoT and LTE-M.

Narrowband IoT and LTE differ in numerous aspects, but the key elements that must always be considered revolve around Coverage Enhancement Level support and the available Power Saving Features:

  • IoT application developers need to take into account that there are differences in the way that coverage performance is implemented in these two technologies. Whereas NB-IoT supports three Coverage Enhancement Levels (CE-Level 0, CE-Level 1, CE-Level 2), LTE-M instead uses two Coverage Enhancement Modes (CE-Mode A, CE-Mode B), each of which consists of two Coverage Enhancement Levels (CE-Mode A: CE-Level 0, CE-Level 1; CE-Mode B: CE-Level 2, CE-Level 3). Due to the lack of CE-Mode B availability worldwide, LTE-M networks currently support CE-Level 0 and CE-Level 1 only. LTE-M thus effectively does not provide the same level of deep indoor penetration with CE-Mode A that NB-IoT does with its CE-Level 2. The boundaries between these CE-Levels and Modes are also different, as threshold values are not harmonized across technologies. For this reason, it is necessary to model the performance of multimode Mobile IoT solutions by specifying different distributions of devices across NB-IoT and LTE-M CE-Levels.
  • For similar reasons, power saving functionalities must also be configured differently on multimode devices for LTE-M and NB-IoT. Most operator networks around the world currently do not support Early Release Indication (also known as "Release Assistance Indication") on LTE-M, as it is a 3GPPTM Release 14 feature. Conversely, some features that may be activated for LTE-M, such as Connected Mode DRX, may be completely missing on the NB-IoT network due to the limited feature capabilities of the infrastructure supplier used by the mobile network operator.
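The RAT-dependent differences above can be captured in a small per-technology configuration table. The following Python sketch is purely illustrative: the profile fields, values, and helper function are assumptions made for demonstration, not module or network APIs.

```python
# Hypothetical per-RAT configuration for a multimode device: coverage
# enhancement support and power-saving feature availability differ between
# NB-IoT and LTE-M, so the application keeps separate settings per technology.
# All field names and values are illustrative assumptions.
RAT_PROFILES = {
    "NB-IoT": {
        "ce_levels": [0, 1, 2],       # three CE levels, incl. deep indoor
        "release_assistance": True,   # RAI is a Release 13 NB-IoT feature
        "connected_mode_drx": False,  # may be absent on some NB-IoT networks
    },
    "LTE-M": {
        "ce_levels": [0, 1],          # CE-Mode A only on most networks today
        "release_assistance": False,  # Release 14 feature, rarely deployed
        "connected_mode_drx": True,
    },
}

def power_saving_strategy(rat: str) -> str:
    """Pick a release strategy based on what the serving RAT supports."""
    profile = RAT_PROFILES[rat]
    return ("use RAI for early release" if profile["release_assistance"]
            else "rely on inactivity timer / C-DRX")

print(power_saving_strategy("NB-IoT"))
print(power_saving_strategy("LTE-M"))
```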
User Equipment Power Classes

Currently deployed NB-IoT and LTE-M networks are designed with the assumption that devices operate in either of two power classes of 3GPPTM Release 13 – Power Class 3 (which can transmit with a power level of 23 dBm) and Power Class 5 (20 dBm). The 3 dB (50%) lower transmit power of Power Class 5 can lead to a reduction in coverage: the Signal-to-Noise Ratio (SNR) detected at the serving eNodeB (network cell) becomes lower; therefore, the device will need to perform more repetitions to enhance coverage. Power Class 5 devices may be out of coverage in places where Power Class 3 devices can still connect to the network in Coverage Enhancement Level 2 (CE-Level 2).

3GPPTM Release 14 introduced a further improvement in power and coverage performance with its Power Class 6, which reduces the UE’s maximum output power further to 14 dBm. Although the Uplink coverage is reduced by 9 dB, with a corresponding 155 dB Maximum Coupling Loss (MCL) vs. Power Class 3’s 164 dB MCL, this has the positive effect of reducing battery consumption on the device. Release 14 compensates for the reduced output power by increasing transmission time, maintaining the same energy per bit as seen in Release 13. The specification also defines the mechanism permitting the serving eNodeB to detect the device power class during connection establishment. This new power class ushers in the possibility of using small coin-cell batteries, which are ideal for limited-size devices and applications, such as wearables.
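The dBm figures of the three power classes translate into absolute transmit power via mW = 10^(dBm/10), which a quick Python check illustrates:

```python
# Conducted transmit power of the 3GPP UE power classes discussed above,
# converted from dBm to milliwatts: mW = 10 ** (dBm / 10).
def dbm_to_mw(dbm: float) -> float:
    return 10 ** (dbm / 10)

for name, dbm in [("Power Class 3", 23), ("Power Class 5", 20), ("Power Class 6", 14)]:
    print(f"{name}: {dbm} dBm = {dbm_to_mw(dbm):.0f} mW")
# Power Class 3: ~200 mW; Power Class 5: 100 mW (the 3 dB / 50% reduction);
# Power Class 6: ~25 mW
```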

Multimode Operation

"Multimode" operation is the configuration of a wireless communication module or chipset so that it actively supports more than one LPWA technology in the same SKU, bringing the flexibility to deploy devices across multiple mobile networks with the same product variant, together with seamless international coverage even where one of the supported radio access technologies (RATs) is not locally available. Modules and chipsets that offer multimode coverage support dynamic reselection of RAT types to either LTE-M, NB-IoT or EGPRS as a preferred connection. The procedure of selecting a less preferred RAT is often referred to as a "Fallback" mechanism. Once the preferred RAT is available again, the multimode module may reselect it to resume operations on the RAT of preference.

Multimode modules are also beneficial when specific operations can be more efficiently carried out if using a different RAT than the preferred one for normal procedures. Take firmware updates, for instance. Due to their large sizes, firmware updates may be transmitted more efficiently using LTE-M rather than NB-IoT; however, for standard, hourly transmission of small sensor readings, NB-IoT would be more suitable. The application logic can be designed such that standard sensor measurements are transmitted over NB-IoT, whereas each bi-annual FOTA update is sent via a time-limited connection over LTE-M. Wherever LPWA coverage is not available, 2G Fallback can be used to guarantee coverage.

New RAT types are selected based on the pre-configured network setting on devices. Alternatively, the new RAT can be selected dynamically by requesting it from the network, or when a network connection drops and a new network is acquired. Finally, multimode modules and chipsets typically allow for the configuration of a higher-priority RAT and of network scanning: as there is no specific standard, the RAT setting and network scan settings are generally pre-configured by applications using vendor-proprietary AT commands.

Singlemode Operation

Singlemode chipsets and modules support only one radio access technology (RAT), for instance, NB-IoT or LTE-M. As such, they cannot be used on mobile networks where the supported RAT is not implemented.

The advantages of singlemode chipsets and modules are their optimized power consumption, smaller size, lower cost, and ease of use. Typically, IoT use cases which are static in nature (neither moving nor roaming) can be conveniently implemented with singlemode modems. Although singlemode modules usually do not need any pre-configuration, they can be programmed via AT commands to search only specific frequency bands in a particular geographic region in order to improve their network acquisition times.

2G Fallback

2G fallback is a feature supported by a subset of multimode Mobile IoT modules which support NB-IoT and/or LTE-M in addition to EGPRS. The switch of radio access technology (RAT) transpires when either NB-IoT and/or LTE-M coverage is not available, or their received signal strength is very low.

With regards to mobility, NB-IoT is a separate RAT from E-UTRAN. As per 3GPPTM specifications 36.304 and 36.331, inter-RAT cell selection and reselection in Idle and Connected Modes are not supported, including handover and measurement reporting. In short, NB-IoT does not have network-controlled mobility between UTRAN, E-UTRAN, GERAN, and CDMA2000.

The module's behavior can generally be configured via AT commands. Proprietary AT commands provided by module vendors allow for the customization of the RAT scan sequence, RAT scan mode, RAT band configuration, etc. When roaming, the module will also automatically search for a new network; the behavior in this scenario can also be influenced via AT commands and SIM card properties.

2G fallbacks may occur during the following scenarios:

  • While selecting a new RAT: When explicitly requested, or when a network connection drops and a new network is acquired
  • Higher-priority RAT: Device will start scanning the RAT sequence from high to low priority, as preconfigured in the device; the priority order must be carefully implemented to avoid long scanning times
  • Specified scanning order during boot-up: In accordance with the preconfigured RAT scan sequence and priority order set in non-volatile memory via AT commands (automatic, LTE-M > NB-IoT > 2G, NB-IoT > LTE-M > 2G, 2G > NB-IoT > LTE-M, etc.)
  • Roaming: In 2G fallback, the module will do a complete fresh 2G scan as per the pre-configuration of the device (controlled via AT commands) and SIM (OPLMNwACT & EHPLMN list) to find a 2G RAT and associated radio frequency bands to establish a network connection
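The boot-up scanning scenario above can be sketched as a simple priority search. This Python fragment is a hypothetical illustration of the selection logic only; real modules implement it in firmware and expose the configuration via vendor-proprietary AT commands.

```python
# Sketch of a RAT scan with 2G fallback: the modem tries each RAT in its
# preconfigured priority order and falls back to the next one when no
# coverage is found. The coverage set and helper name are illustrative.
SCAN_ORDER = ["LTE-M", "NB-IoT", "2G"]  # preconfigured priority, high to low

def select_rat(available_coverage, scan_order=SCAN_ORDER):
    """Return the first RAT in the priority order with local coverage, or None."""
    for rat in scan_order:
        if rat in available_coverage:
            return rat
    return None  # no network found; a modem would retry after a back-off

print(select_rat({"LTE-M", "2G"}))  # preferred RAT available -> LTE-M
print(select_rat({"2G"}))           # no LPWA coverage -> 2G fallback
```

As the text notes, the priority order must be chosen carefully: a long scan sequence with the most likely RAT listed last translates directly into long network acquisition times.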
Critical IoT

Critical IoT is a term that refers to Internet of Things use cases, applications, and devices where high availability, ultra reliability, and low latency are critical. Such high-end applications are characterised by high data rates, latencies usually in the millisecond range, high mobility, and the use of services like SMS or voice. They require maximum performance; hence it’s essential to base them on robust and stable technologies which ensure an availability of service close to 100%.

Smart vehicles, video surveillance, smart grid, and remote device control are only a few examples of IoT applications applicable to Critical IoT use cases. Critical IoT applications are currently supported via LTE technology and will be further enabled by the upcoming introduction of 5G networking, which will enable newer, even more complex communication scenarios for both consumer and business sectors.

The need for high availability and frequent operations imposes higher requirements on the device side, especially with regard to battery life. The battery longevity of a typical Critical IoT use case, when compared to Massive IoT, is relatively short. Although 3GPPTM cellular technology is the most suitable for the realization of Critical IoT use cases over wide area networks, secured Wi-Fi networks may be a good option for local area networks.

Figure: Critical IoT vs. Massive IoT

Maximum Coupling Loss (MCL)

A link budget is an accounting of all power gains and losses experienced by a communication signal in a given telecommunication system. This includes all gains and losses at the transmitter, during propagation through a medium (free space), as well as all power gains and losses at the receiver.

Traditionally, network planning has used "Maximum Allowable Path Loss," MAPL for short, to define the geographic region in proximity to the transmitter where reception and demodulation of its transmitted signal is still possible by a specific receiver. The MAPL is the difference between the transmitting end's total radiated power (TRP) and the receiving end's total isotropic sensitivity (TIS), in decibels (dB). When propagating signals experience less attenuation than the MAPL, the receiver is still able to detect them. As soon as the path loss becomes greater than the MAPL, the receiving end can no longer detect and demodulate the transmitted information. MAPL can be described as follows:

MAPL (dB) = P_Tx - (Noise Figure + SINR + Noise Floor) + Antenna Gain_Tx + Antenna Gain_Rx

That said, in the case of NB-IoT and LTE-M, most low-cost IoT devices do not integrate optimized antennas with high gains. This means that the ability of devices to close the link budget with these technologies is highly dependent on the implementation of individual device antennas. To normalize the terminal-related aspects, the standardization bodies opted to use Maximum Coupling Loss (MCL), which is the maximum propagation loss possible at the conducted power level (i.e., without antenna gain) which the radio link can tolerate and still demodulate a signal at the receiving end. MCL has therefore established itself as an industry term referring to the coverage performance of Mobile IoT radio access technologies from a link budget perspective. MCL is essentially the difference in power (in dB) between the conducted transmit power (in dBm) and the receiver sensitivity (in dBm). Another way to express this is given by the following formula:

MAPL (dB) = MCL + Antenna Gain Tx + Antenna Gain Rx
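The two formulas above combine into a few lines of arithmetic. The receiver sensitivity and antenna gain figures below are illustrative assumptions, chosen so that the uplink example lands on the well-known 164 dB NB-IoT MCL target:

```python
# Link-budget arithmetic from the formulas above: MCL is the conducted
# transmit power minus the receiver sensitivity, and MAPL adds the antenna
# gains back in. Example values are illustrative, not a specific network's.
def mcl(tx_power_dbm: float, rx_sensitivity_dbm: float) -> float:
    return tx_power_dbm - rx_sensitivity_dbm

def mapl(mcl_db: float, ant_gain_tx_db: float, ant_gain_rx_db: float) -> float:
    return mcl_db + ant_gain_tx_db + ant_gain_rx_db

# Illustrative NB-IoT uplink: 23 dBm UE transmit power (Power Class 3),
# -141 dBm assumed eNodeB receiver sensitivity.
link_mcl = mcl(23, -141)
print(link_mcl)               # 164 dB
print(mapl(link_mcl, 0, 18))  # with 0 dBi UE and 18 dBi eNodeB antennas: 182 dB
```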

In the figures below, standard link budgets for NB-IoT are graphically depicted. Please note that the MAPL in the Downlink and Uplink are approximately the same. This secures link balancing, i.e. that the user equipment's "receiving" footprint is approximately the same as the serving cell's (eNodeB) "receiving" footprint. As most 3GPPTM procedures are bi-directional, a balanced link is critical to ensure optimized network operation with minimal access failures and packet loss, and good power efficiency.

Figure: NB-IoT Link Budget to "Close" the Downlink

Figure: NB-IoT Link Budget to "Close" the Uplink

Service Enablement OMA Lightweight M2M

Lightweight M2M (LwM2M) protocol is an open-industry messaging protocol defined by the Open Mobile Alliance (OMA) for device management of in-field, fixed or mobile M2M/IoT devices, currently in its version 1.1 release. As the successor to the OMA Device Management (OMA-DM) specification, this REST-based protocol is used to communicate between the LwM2M client software (on an IoT device) and LwM2M server software (on an IoT management platform). LwM2M's simple resource model includes a set of Objects and corresponding Resources that can be created, updated, deleted, and retrieved asynchronously. It covers device management actions - such as bootstrapping, device configuration, firmware updates, fault management/diagnostics, and connection management, as well as data plane actuations - control, data reporting, and lock and wipe, for instance. Diagnostics include the ability to query the battery level of a device, consult the device's settings, assess the memory status, identify the firmware, software and hardware version of the device and its components, as well as identify its capabilities. Connection Management entails the management of basic parameters for the device's cellular connectivity on the operator network, such as APN, WEP keys, bearer selection, etc. The control functionalities of LwM2M include the ability to set-up access control on LwM2M Objects for various servers, reboot the device, disable the device for a specific amount of time, or force devices to perform a registration procedure.

The LwM2M standard was created as a response to the demands of the expanding M2M/IoT market to overcome certain limitations, such as cost-effective remote management of battery-powered, constrained devices (<20kB RAM), cross-standard interoperability, and security issues. The client and server communication is done over CoAP. Furthermore, IPSO V1.0-defined LwM2M Objects may sit atop, providing data models for different industries - connected car, smart cities, transport, and industry, among others.

LwM2M is commonly used in Mobile IoT use cases for Device Management and Service Enablement, as it supports numerous key aspects that are critical for M2M/IoT devices having low-power microcontrollers and small amounts of Flash and RAM, transmitting over 3GPPTM networks requiring efficient bandwidth usage:

  • Small client footprint
  • Transport Layer-agnostic, supporting Non-IP, TCP/IP, UDP/IP, and SMS
  • CoAP bandwidth optimization ensures improved bandwidth efficiency
  • DTLS-based security based on CoAP (IETF)
  • Developer toolkits for app development
  • Queue mode functionality informing the server that a device will be disconnected for a specific timeframe

The following four logical interfaces are defined on both LwM2M server and client:

  • Bootstrap: This interface allows the server to manage the keying, access control and configuration of a device so that it can enroll with a server.
  • Discovery and Registration: This enables a client to let the server know it exists and to register its capabilities.
  • Device Management and Service Enablement: With this interface, the server can perform device management and service enablement tasks by sending operations to the client and processing the corresponding responses from the client.
  • Information Reporting: The client can report resource information to the server via this interface. This may be triggered either periodically or by events.

Figure: LwM2M Interfaces and Flows
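The Object/Instance/Resource addressing that underpins these interfaces can be illustrated with a toy client. The Object and Resource IDs shown (Device Object 3, with Manufacturer = 0, Battery Level = 9, Device Type = 17) are registered OMA identifiers, but the client class and its values are a simplified sketch, not a real LwM2M stack:

```python
# Minimal sketch of LwM2M Object/Resource addressing: each Resource is reached
# via an /ObjectID/InstanceID/ResourceID path, e.g. /3/0/9 for the Battery
# Level Resource of Device Object instance 0. Values are illustrative.
class LwM2MClient:
    def __init__(self):
        # Object 3 ("Device"), instance 0, with a few standard Resources
        self.objects = {
            3: {0: {0: "Acme Corp",      # /3/0/0  Manufacturer (hypothetical)
                    9: 87,               # /3/0/9  Battery Level (%)
                    17: "smart-meter"}}  # /3/0/17 Device Type
        }

    def read(self, path: str):
        """Handle a Read operation on an /object/instance/resource path."""
        obj, inst, res = (int(p) for p in path.strip("/").split("/"))
        return self.objects[obj][inst][res]

client = LwM2MClient()
print(client.read("/3/0/9"))  # the battery-level diagnostic query from the text
```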

Although CoAP may also be used over TLS/TCP, the IoT Solution Optimizer does not offer this option due to its unsuitability for Mobile IoT use cases.

Firmware over the Air (FOTA)

Firmware Over-the-Air (FOTA) refers to the wireless download of an operating firmware package for wireless communication modules from a server. This is different from the downloaded updates of MCU-flashed application code running on an IoT device, referred to as "Software Over-the-Air" (SOTA). Once the package is received, the FOTA client installs the new code. With this technology, the code on an embedded device - containing new features, bug fixes, or security patches - can be conveniently and reliably updated without having to physically connect the device to any other source.

FOTA is a fast and cost-efficient method for dealing with the mass of devices in the field and ensures more security in the Internet of Things. One of the common FOTA implementations has been defined in the architecture for Mobile Device Management (MDM) systems, standardized by the OMA. Four states are defined: Idle (State=0), Downloading (State=1), Downloaded (State=2), and Updating (State=3). FOTA as defined by the OMA includes the following features:

  • Definition of a "Write to Package URI" for downloading the firmware update;
  • An integrity check to confirm that the downloaded package is not corrupt;
  • Executable resource updating;
  • Confirmation of whether the firmware update succeeded or failed.
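The four OMA FOTA states can be modeled as a small state machine. The transition map below is a simplified, non-normative reading of the update flow:

```python
# Sketch of the four OMA FOTA states listed above and plausible transitions
# between them. The transition map is illustrative, not normative.
IDLE, DOWNLOADING, DOWNLOADED, UPDATING = 0, 1, 2, 3

TRANSITIONS = {
    IDLE: {DOWNLOADING},              # "Write to Package URI" starts a download
    DOWNLOADING: {DOWNLOADED, IDLE},  # finish, or abort on failure
    DOWNLOADED: {UPDATING, IDLE},     # integrity check passed, or rejected
    UPDATING: {IDLE},                 # report success/failure, return to Idle
}

class FotaClient:
    def __init__(self):
        self.state = IDLE

    def transition(self, new_state: int) -> None:
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

fota = FotaClient()
for s in (DOWNLOADING, DOWNLOADED, UPDATING, IDLE):
    fota.transition(s)
print(fota.state)  # back to Idle (0) after a successful update cycle
```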
Device Management

In the broadest sense, the term "device management" (DM) refers to tools, processes or capabilities that allow managing different types of devices or machines. In the context of IoT, it usually refers to software or an application that provides functionalities to remotely administer, maintain and operate devices. It greatly simplifies the handling of especially large volumes of devices, in which most operations can be automated and on-site activities can be reduced to a minimum. Device management has therefore become one of the key aspects of deploying successful IoT applications. The main functionalities of device management software can be clustered into the following key categories:

  • Deployment and authentication – Includes the process of onboarding of devices into the application or software, allowing only the verified hardware to enroll;
  • Configuration and control – Essential for adjusting the settings or the behavior of devices to specific conditions which may be critical for their functioning or performance;
  • Monitoring and diagnostics – Helps in the detection or determination of unforeseen operational issues that may impact a device’s performance;
  • Software/firmware management and maintenance – Plays an important role in maintaining device health and resolving software issues such as bugs or security faults, as it allows a remote mass-update of the assets.

The rapid growth of Mobile IoT creates the need for even more sophisticated device management functionalities that enable the more efficient control and management of multiple types of devices. Driven by the search for further efficiency and simplification in IoT applications, a new set of standards has emerged with specifications addressing the complexity and variety of device management deployments. These include the 3GPPTM Release 13 Service Capability Exposure Function (SCEF), as well as the Open Mobile Alliance's (OMA) Lightweight Machine to Machine (LwM2M). The latter uses efficient header and payload encoding, blockwise transfer fragmentation handling, built-in congestion control, and the more power-efficient CoAP protocol over Non-IP or UDP/IP, making it ideal for NarrowBand IoT and LTE-M deployments.

LwM2M Objects

The LwM2M resource model defines each piece of information made available by the LwM2M Client as a "Resource." Resources are further logically organized and grouped into Objects; the Firmware Update Object, for example, contains all Resources used for firmware updating purposes. The body OMA SpecWorks is responsible for defining these optional Objects, which extend the capabilities of the LwM2M standard. Objects sit atop the LwM2M protocol, just below the application itself. For a full list of defined Objects, please refer to the LwM2M Registry. A partial list of available LwM2M Objects is presented below:

  • Power Control
  • Light Control
  • Accelerometer
  • Magnetometer
  • Barometer
  • Altitude
  • Load
  • Pressure
  • Loudness
  • Gyrometer
  • Addressable Text Display
  • Multiple Axis Joystick
  • Multi-state Selector
  • Dimmer
  • powerupLog
  • radioLinkFailureEvent
  • cellBlacklistEvent
  • NeighborCellMeasurements
  • ServingCellMeasurements
  • PagingDRX
  • txPowerBackOffEvent
  • SipRegistrationEvent
  • sipSubscriptionEvent
  • VolteCallEvent
  • volteCallStateChangeEvent
  • LwM2M Security
  • LwM2M Server
  • LwM2M Access Control
  • Device
  • Connectivity Monitoring
  • Firmware Update
  • Location
  • Connectivity Statistics
  • Cellular Connectivity
  • APN Connection Profile
  • WLAN Connectivity
  • Bearer Selection
  • Lock and Wipe
  • DevCapMgmt
  • Portfolio
  • LwM2M Software Management
  • LwM2M Software Component
  • BinaryAppDataContainer
  • Event Log
  • Communication Characteristics (LwM2M ver. 1.1)
  • Non-Access Stratum (NAS) Configuration (LwM2M ver. 1.1)
  • LwM2M OSCORE (LwM2M ver. 1.1)
IoT Application Reporting Events

The IoT Solution Optimizer currently models IoT application communication between the server and IoT device client as an “average reporting event.” This transaction may represent a regular payload exchange that is either device-initiated (an uplink payload is initially sent to the server after a random access procedure and set-up of an RRC bearer) or server-initiated (a downlink payload is sent to the device after a successful paging of the device and set-up of an RRC bearer). Naturally, handshake exchanges of messages are also possible, e.g. an uplink message closely followed by downlink and/or uplink messages, or vice-versa. Each transmission to/from the IoT device cannot exceed the maximum transmission unit (MTU) size; for most wireless communication modules, this is 512 Bytes. Larger messages must be broken up into multiple transactions, each maximally 512 Bytes in size.

Figure: Example Application Reporting Event
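The MTU-driven fragmentation described above can be sketched in a few lines, assuming the 512-Byte figure from the text:

```python
# Splitting an application message into MTU-sized transactions: each
# transmission to/from the device may not exceed the module's MTU
# (assumed here to be 512 bytes, as in the text).
MTU = 512  # bytes

def fragment(payload: bytes, mtu: int = MTU):
    """Break a payload into chunks of at most `mtu` bytes."""
    return [payload[i:i + mtu] for i in range(0, len(payload), mtu)]

message = bytes(1300)            # a 1300-byte report
chunks = fragment(message)
print([len(c) for c in chunks])  # [512, 512, 276] -> three transactions
```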

IoT applications may require different types of messages to be scheduled and exchanged. Consider, for instance, an application that sends hourly temperature reports, weekly maintenance logs, and yearly firmware updates. In order to properly model such complex communication patterns, the IoT Solution Optimizer will soon implement capabilities allowing such scheduled interactions to be properly configured and considered in a performance analysis. If you have specific needs which are currently not supported by our service, please contact Tool Support to provide feedback which may be considered to develop this service further. Our goal is to address your design and optimization needs as best as possible.

Transport Layer Which transport protocol should I use?

The choice of transport protocol will define which messaging or management protocol can be used on top:

  • Non-IP transport, or Non-IP Data Delivery (NIDD), can be used to send the application payload either directly, or wrapped in CoAP or MQTT-SN overhead. These are the most efficient transport techniques available for bandwidth- and power-optimized Mobile IoT communication.
  • UDP over IPv4/IPv6 can support MQTT-SN (Publish/Subscribe) or CoAP (REST) protocol.
  • TCP over IPv4/IPv6 uses the MQTT (Publish/Subscribe) or very heavy HTTP (REST) protocols.

In the end, the choice of transport mechanism impacts the number of IoT device-originated messages needed to perform a handshake with the server, the supported feature capabilities (including optional headers), as well as the interoperability with cloud connectors.

Figure: Transport Protocol Options

Transmission Control Protocol (TCP)

The Transmission Control Protocol (TCP) is a connection-oriented, end-to-end reliable protocol designed to fit into a layered hierarchy of protocols which support multi-network applications. TCP implements the OSI model's Transport Layer and typically uses IP (IPv4 or IPv6) as the protocol bearer for communication over the Internet. That said, it can also be used with other lower-layer protocols. TCP requires only a basic datagram service from its lower-layer protocol and makes few assumptions as to the reliability of the lower layer. It takes care of sequence-numbering of data segments and retransmission of missed segments to deliver the data in the correct order and without gaps.

Figure: TCP Protocol Header

Within a TCP connection, data can be transferred in both directions between the connected network nodes. To provide this service, TCP must add a relatively large amount of overhead to the transferred data, e.g. for acknowledgement of successfully transferred segments, repetition of lost segments and connection management. Both the set-up and tear-down of a TCP connection requires an exchange of at least three packets, each. Because of the need for the sender of data to repeat missing segments, these segments must be stored until they are acknowledged, thereby requiring a relatively large amount of memory on the side of the sender. As such, TCP is most suitable for the transfer of large amounts of data with no real-time requirements. It is less suited for low cost devices requiring low energy consumption. Usually, TCP/IP is used in conjunction with MQTT or HTTP protocols in M2M or IoT services.

Figure: TCP Transmissions with(out) Keep-alive Messages

Another important aspect of TCP is that sessions are prone to being closed by Mobile Network Operator firewall timers. To prevent this, a "keep-alive" message must be sent periodically. This results in three possible scenarios for TCP transmissions over 3GPPTM mobile networks, as shown above:

  • (A) TCP session closed - The session will need to be renegotiated with the next session-opening (TCP handshake);
  • (B) TCP session left open without "keep alive" probe packets - Session set-up does not need to be renegotiated before the next application message;
  • (C) TCP session left open with "Keep Alive" probe packets - Session set-up does not need to be renegotiated before the next application message.

Figure: Comparison of Transport Protocols - TCP


User Datagram Protocol (UDP)

The User Datagram Protocol (UDP) is a connection-less transport-layer protocol using IP (IPv4 or IPv6) as the underlying protocol for communication over the Internet. UDP provides for addressing and the basic transfer of datagrams over IP networks. In contrast to TCP, UDP provides no protection against loss of data segments or segments arriving in an incorrect order. UDP is therefore best suited for applications where protection against loss of data is implemented in higher layers, or not required at all.

Figure: UDP Protocol Header

Due to the less complex nature of UDP as compared to TCP, it can be implemented on simpler hardware. Given its reduced overhead, UDP lends itself to applications mandating low energy consumption and data efficiency, such as Mobile IoT use cases. The UDP transport protocol is generally used in conjunction with MQTT-SN or CoAP in the case of M2M and IoT services. Additional key features of UDP:

  • Data integrity provided via checksums
  • Supports different functions at the source and destination using port numbers
  • Exposed to the unreliability of the underlying network, as there are no handshake dialogues
  • Lack of retransmission delays makes it suitable for real-time applications
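The fixed 8-byte UDP header described above can be packed with Python's standard `struct` module; the ports and payload here are arbitrary example values, and the checksum is left zero.

```python
import struct

def udp_header(src_port, dst_port, payload_len):
    """Pack the fixed 8-byte UDP header (RFC 768): ports, length, checksum."""
    length = 8 + payload_len   # header plus payload, in bytes
    checksum = 0               # checksum computation omitted in this sketch
    return struct.pack("!HHHH", src_port, dst_port, length, checksum)

payload = b'{"temp": 21.5}'
datagram = udp_header(49152, 5683, len(payload)) + payload
# Only 8 bytes of transport-layer overhead, versus 20 or more for TCP.
```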

Figure: Comparisons of Transport Protocols - UDP

Maximum Transmission Unit (MTU)

A key aspect to consider in the dimensioning of application messages is the Maximum Transmission Unit (MTU) size of the Uplink TCP/UDP payload; this is represented in the figure below as "UDP Payload." The Downlink TCP/UDP payload is not affected by this limit. The MTU controls how large the payload and protocol stack sitting on top of the TCP or UDP transport bearer can be. This "TCP payload" or "UDP payload" includes the application payload, the messaging/management protocol and the MAC TAG from TLS/DTLS/HTTPS transport encryption, if used.

Figure: Maximum MTU Size

There are different maximum TCP/UDP payload sizes supported per chipset/module, based on the underlying transport/IP protocol that is used (TCP/IPv4, TCP/IPv6, UDP/IPv4, UDP/IPv6, Non-IP). Based on the selected chipset/module and the chosen transport/Internet protocol, users will be limited in how large their individual TCP/UDP payloads can be.
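As a worked example of such a payload budget, the following arithmetic subtracts the protocol headers from an assumed MTU; all figures are illustrative and not taken from any specific module's datasheet.

```python
# Hypothetical uplink budget for a UDP/IPv6 stack with DTLS; every figure
# below is illustrative, not taken from any specific module's datasheet.
MTU           = 1280   # minimum IPv6 link MTU (RFC 8200)
IPV6_HEADER   = 40
UDP_HEADER    = 8
DTLS_OVERHEAD = 29     # example DTLS 1.2 record overhead incl. MAC tag

max_app_payload = MTU - IPV6_HEADER - UDP_HEADER - DTLS_OVERHEAD
# Bytes left for the CoAP/MQTT-SN framing plus application data: 1203
```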

Figure: Uplink MTU Sizes, Select Modules

TCP "Keep Alive" Messages

The Transmission Control Protocol (TCP) provides session-based communication. Sessions must be started between peers, maintained, and eventually terminated. As network NAT proxies or firewalls do not allow inactive data sessions to remain active over longer periods of time, they may tear down the TCP pipe in-between without either of the peers being aware. This unfortunate situation may even lead to exception scenarios where improperly-designed IoT devices are unable to recover the data communication, or a device reboot is required.

TCP "keep alive" messages are available to prevent the premature tear-down of an active TCP session. When setting up a TCP connection, timers related to the "keep alive" procedure are negotiated. When the respective timer reaches zero, the TCP client sends its peer a "keep alive" probe packet with no payload data within it; TCP permits the handling of zero-length data packets. Additionally, the acknowledgement ACK flag is also enabled in the probe packet. The remote host will also reply with no data with an ACK set. Once the reply to the "keep alive" probe is received back at the client, it has essentially confirmed that the connection-based data pipe is still available for application traffic to the host. The same "keep alive" probe resets the network-side firewall timer to avoid premature disconnection. Depending on the network and/or remote host endpoint configuration, "keep alive" probe packets may need to be sent every few minutes. This has a considerable impact on the battery life of the IoT device; hence, it is strongly discouraged to use this feature for Mobile IoT technologies such as NB-IoT and LTE-M, unless it is absolutely necessary. Connectionless protocols such as CoAP and MQTT-SN do not require TCP "keep alive" messages.
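On a typical Linux system, the keep-alive behaviour described above can be configured per socket. This sketch uses the standard Python `socket` module; the `TCP_KEEPIDLE`/`TCP_KEEPINTVL`/`TCP_KEEPCNT` options are Linux-specific (hence the guard), and the timer values are illustrative.

```python
import socket

# Enable TCP keep-alive on a client socket. SO_KEEPALIVE is portable;
# TCP_KEEPIDLE/TCP_KEEPINTVL/TCP_KEEPCNT are Linux-specific, hence the guard.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

if hasattr(socket, "TCP_KEEPIDLE"):
    # Idle time before the first probe; short values drain IoT device batteries.
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 840)   # 14 minutes
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 60)   # probe interval
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 3)      # failed-probe limit

keepalive_enabled = sock.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE)
sock.close()
```

Note that the idle timer must stay below the network-side NAT/firewall timeout for the probes to have any effect.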

Figure: "Keep Alive" Probe Packets include Acknowledgement

Figure: TCP "Keep Alive" Probe Packets are sent periodically

The TCP "keep alive" procedure can be used to identify whether peers have lost their connection (e.g. by rebooting or moving out of coverage), to advise one end when the peer dies (e.g. through a drained battery, tampering, or malfunction), or to detect when the network connection in-between goes down. As stated earlier, "keep alive" can also be used to prevent disconnection of the channel due to inactivity. Such disconnections are caused by the connection tracking procedures implemented in proxies and firewalls, which track all connections passing through them. Because they can only keep a limited number of connections in memory, older, inactive connections are regularly purged. Periodically sending "keep alive" probe packets over the network is a simple method to remain in an optimal position in the firewall's queue, with limited risk of session deletion. Operator networks usually have NAT proxy timers of 27 to 30 minutes. Some public clouds require regular "pings" to their connectors; Microsoft Azure IoT Hub, for instance, recommends a "keep alive" probe every 15 minutes. This has a negative impact on device performance due to excessive battery drain, just to keep the active data session between both end-points open.

IP Layer Non-IP Data Delivery (NIDD)

There are many benefits to transporting your IoT application data over Non-IP. This capability, known as Non-IP Data Delivery (NIDD), is supported by certain NarrowBand IoT networks and by commercially available wireless communication chipsets and modules. With NIDD it is possible to improve communication efficiency and reduce operating costs, as little to no protocol overhead needs to be sent over the air:

  • Customers can avoid being charged for TCP/IP or UDP/IP headers, which can range between 30-40 Bytes per uplink and downlink message.
  • In the case of TCP/IP, there are also additional messages required for the SYN (synchronization/set-up message), SYN Acknowledgement, FIN (tear-down message) and FIN Acknowledgement. From the viewpoint of the IoT application, this overhead is a waste of resources. With NIDD, IoT applications on devices can simply append MQTT-SN or CoAP protocol framing to their application messages and transmit them over the NAS bearer (Control Plane) to the network. This is not an option if an enterprise needs to use MQTT or HTTP, both of which require TCP/IP.
  • Because fewer messages and smaller payloads are sent, an appreciable improvement in device performance and battery life is observed, the effect of a higher payload-to-overhead efficiency.
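The efficiency gain can be illustrated with simple arithmetic; the header sizes below are the standard minimums, and the 20-byte application payload is an invented example.

```python
# Illustrative per-message efficiency comparison; header sizes are the
# standard minimums, and the 20-byte payload is an invented sensor reading.
APP_PAYLOAD = 20        # e.g. one CoAP-framed sensor reading
UDP_IPV4    = 20 + 8    # IPv4 header + UDP header
NON_IP      = 0         # NIDD: the payload rides directly on the NAS bearer

udp_efficiency  = APP_PAYLOAD / (APP_PAYLOAD + UDP_IPV4)
nidd_efficiency = APP_PAYLOAD / (APP_PAYLOAD + NON_IP)
# NIDD lifts payload efficiency from roughly 42% to 100% at this message size.
```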

Figure: Non-IP Data Delivery vs. IP-based Data Delivery

Do you want to learn how Non-IP works on some mobile operator networks?

How does Non-IP Data Delivery work?

Non-IP Data Delivery (NIDD) is often used by enterprises wanting to optimize their payloads and IoT application efficiency. Application payload can be sent directly over the Non-IP bearer, or encapsulated in MQTT-SN or CoAP. If the application data is wrapped in HTTP or MQTT, it is necessary to use the TCP/IP protocol underneath, which precludes the usage of NIDD in your M2M or IoT solution. Mobile network operators can implement NIDD in two different ways:

  • A common approach to enable Non-IP is to implement a Service Capability Exposure Function (SCEF), which can map Non-IP payload sent over the air to/from IP-based traffic on the Internet. This network node was defined for the 3GPP™ core network architecture within Release 13. It provides a means to securely expose, via REST APIs, the services and capabilities provided by the operator network to external application servers. Apart from supporting the NIDD functionality, it enables the usage of External IDs (e.g. name-of-device@domain.com), APIs and AAA, device trigger requests, etc.
  • Alternatively, operators may implement IMSI to IP address mapping within their core networks. NB-IoT enterprises obtain a Private APN with an IMSI range mapped to an IP address range. NIDD traffic received at the core network from any specific IoT device (identified via its SIM card's IMSI) within that range is assigned the corresponding IP address by the APN. Additionally, an APN with a single target IP address is configured, to which the application data can be sent. Usage of a Private APN has the additional benefit of bringing extra security to the solution. The figures below illustrate how IMSI to IP address mapping is handled on mobile operator networks.
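The IMSI-range to IP-range mapping described above can be sketched as follows, using Python's standard `ipaddress` module; the IMSI base and the subnet are invented for illustration.

```python
import ipaddress

# Sketch of the IMSI-range to IP-range mapping; the IMSI base and the
# subnet are invented for illustration, not real provisioning data.
IMSI_BASE = 262011234500000                      # first IMSI of the range
SUBNET = ipaddress.ip_network("10.64.0.0/16")    # Private APN address range

def imsi_to_ip(imsi):
    offset = imsi - IMSI_BASE
    if not 0 <= offset < SUBNET.num_addresses:
        raise ValueError("IMSI outside the provisioned range")
    return SUBNET[offset]   # this device's NIDD traffic gets this address

addr = imsi_to_ip(262011234500258)
```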

Figure: How Non-IP Data Delivery Works (Uplink)

Figure: How Non-IP Data Delivery Works (Downlink)

Internet Protocol Version 6 (IPv6)

The Internet Protocol Version 6 (IPv6) is the designated successor to IPv4, the protocol that forms the technical foundation of the Internet of today. As a network layer protocol, it provides addressing and routing of packets in, and between, IP networks as a service to higher (transport layer) protocols, such as TCP and UDP.

With a 128-bit address length, it removes the main limitation of IPv4 by increasing the address space from approximately 4.3 billion addressable hosts in IPv4 to around 3.4 × 10^38. IPv6 will gradually replace IPv4 over the coming decades, because the IPv4 address space is already insufficient to address the exponentially growing demand of today's Internet traffic - requiring mitigating measures like Network Address Translation (NAT) - let alone that of the developing Internet of Things.

Similar to IPv4, IPv6 requires an implementation of the lower and higher layers of the OSI model, including a transport protocol (TCP or UDP). This can render IP-based data transfer less suitable for applications requiring low cost devices or low energy consumption in devices. For NarrowBand IoT devices, Non-IP Data Delivery (NIDD) can be an option with lower energy consumption and complexity than IP-based protocols.

Figure: Comparison of Protocols for IoT

Internet Protocol Version 4 (IPv4)

The Internet Protocol Version 4 (IPv4) is the network layer protocol forming the technical foundation of the Internet of today. It provides addressing and routing of packets in, and between, IP networks as a service to higher (transport layer) protocols, such as TCP and UDP.

Figure: IPv4 20-Byte Header

IPv4 was introduced in 1981, before the Internet became the household commodity it is today. It operates with an address length of 32 Bits, which allows addressing of 2^32, or ca. 4.3 billion, hosts. Due to inefficiencies in address assignment, far fewer hosts can be addressed in practice. Although mitigating measures, such as Network Address Translation (NAT), have been put in place over the past decades, IPv4 is no longer able to support the growing Internet with a sufficient number of addresses. IPv4 will therefore be gradually replaced by IPv6 over the coming decades. IPv6 offers a far larger address space of 2^128 addresses, among other optimizations.
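Both address-space figures can be checked with Python's standard `ipaddress` module:

```python
import ipaddress

# The full IPv4 and IPv6 address spaces, computed with the standard-library
# ipaddress module.
ipv4_space = ipaddress.ip_network("0.0.0.0/0").num_addresses
ipv6_space = ipaddress.ip_network("::/0").num_addresses

assert ipv4_space == 2**32     # ca. 4.3 billion hosts
assert ipv6_space == 2**128    # ca. 3.4 x 10^38 hosts
```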

Using IPv4 on a device requires an implementation of the lower and higher layers of the OSI model, including a transport protocol like UDP or TCP. This can render IP-based data transfers less suitable for applications requiring low cost devices or low energy consumption in devices intended to operate over many years on a battery. For NarrowBand IoT devices, Non-IP Data Delivery (NIDD) can be an option with lower energy consumption and complexity than IP-based protocols.

Figure: Comparisons of Protocols - IPv4


Dual Stack Internet Protocol (IPv4v6)

3GPP™ specifies three types of PDP/PDN to describe connections: PDP/PDN Type IPv4, PDP/PDN Type IPv6, and PDP/PDN Type IPv4v6. Each time the wireless communication module establishes a data session, a particular PDP/PDN Type is requested from the supported types communicated by the network during the Attach sequence. This request triggers the MME to initiate a PDP/PDN request via the Serving Gateway (SGW) to the Packet Data Network Gateway (PGW), subject to the grants in the subscription.

PDP/PDN Type IPv4v6 was introduced with LTE's Evolved Packet System (EPS) core network architecture. It allows the IoT device to get an IPv4 address and an IPv6 prefix within a single PDP/PDN bearer. This option is stored in the HSS as a part of the IoT device SIM's subscription data. Although a mobile network operator's 4G infrastructure should have no issues in handling this PDN Type, the level of support in GSM/UMTS networks varies, depending on the Serving GPRS Support Node (SGSN) firmware release. In order to avoid Attach problems at a visited SGSN, operator roaming agreements supporting IPv6 must account for IPv4v6 and ensure that the implementations align with GSMA requirements.

Should I use Internet Protocol?

Users may send their data between the IoT device and the mobile network using Internet Protocol (IP). As the MNO's system is a private network, it is actually technically possible to avoid using IP to send datagrams between endpoints. This type of Non-IP transport, also known as Non-IP Data Delivery (NIDD), may be available for specific technologies that benefit from its use - such as NB-IoT and LTE-M. Please note that this option is only available if it is supported by both the chipset protocol stack in the device's wireless communication module, and the mobile network.

Less critical for the decision of whether to use IP or NIDD is the higher-layer messaging protocol that is intended to be used. CoAP and MQTT-SN are the most efficient messaging/management protocols available for bandwidth- and power-optimized, constrained Mobile IoT devices. They may be sent over a NIDD bearer between the IoT device and the operator network. TCP-based protocols such as MQTT and HTTP, as well as UDP transport itself, must be sent over IPv4/IPv6. The intended protocol therefore influences the decision of whether to use Non-IP Data Delivery, or not.

Figure: Internet Protocol Options

Messaging / Management Layer Message Queuing Telemetry Transport (MQTT)

Message Queuing Telemetry Transport (MQTT) is a publish/subscribe messaging protocol and telemetry technology used to interconnect M2M and IoT applications in constrained environments. It is optimized for embedded systems with limited processing power and memory. It was invented in 1999, and version 3.1.1 has been an OASIS standard since 2014.

Summary of Key Qualities:

  • Lightweight and optimized for performance, battery and data-limited devices
  • Low-bandwidth, with on average 50 Bytes per packet when using TCP; even less with MQTT-SN
  • Reliable with QoS tagging, delivery can be ensured in unreliable networks
  • Open Source Protocol, i.e. royalty-free
  • Data type-agnostic: messages need not have any particular format
  • Scalable, whereby one broker/server handles thousands of clients

MQTT uses a publishing/subscribing model, instead of a point-to-point approach. The data carried by the MQTT protocol across the network for the application has an associated Quality of Service and Topic Name. Clients are referred to as “publishers” and “subscribers” instead of senders and receivers:

  • Publishers and subscribers are loosely coupled via a message broker (= server)
  • Publishers do not need to know who or what is subscribing to their messages, and vice-versa
  • This simplicity allows each message to be very small in size, therefore reducing the demands on the network and on all remote monitoring devices from which MQTT messages originate
  • Packets of information sent across the network connection during a session are referred to as “control packets”

Figure: Example MQTT Parking Sensor Application

As MQTT is transported over a TCP/IP bearer, there are three possible TCP transmission scenarios, as shown below:

  • (A) TCP session closed - The MQTT set-up will need to be renegotiated with the next session-opening (TCP handshake);
  • (B) TCP session left open without "Keep Alive" probe packets - MQTT set-up does not need to be renegotiated before the next application message;
  • (C) TCP session left open with "Keep Alive" probe packets - MQTT set-up does not need to be renegotiated before the next application message.

Figure: MQTT over TCP Transport Protocol

As MQTT is a messaging protocol focused on message transmission, IoT solution developers have the responsibility to implement appropriate security features separately. MQTT is used by numerous cloud-based platforms and services, including Amazon Web Services (AWS), Microsoft Azure, OpenStack, and Facebook Messenger.

Figure: Comparison of IoT Protocols - MQTT

How do clients establish a connection to the MQTT broker?

1. The client sends Connect Command to broker, containing:

  • Unique Client ID
  • Keep Alive Time
  • Protocol information (MQTT version, etc.)
  • Flags (User Name, Password, QoSLevel, etc.)

2. The broker sends a CONNACK command back to the client in acknowledgement.

3. The connection is established.
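The exchange above starts with a CONNECT control packet. As an illustration, the following sketch assembles a minimal MQTT 3.1.1 CONNECT packet by hand (no username, password or will); the client ID and keep-alive value are invented examples.

```python
import struct

def mqtt_connect_packet(client_id, keepalive, clean_session=True):
    """Assemble a minimal MQTT 3.1.1 CONNECT packet (no username/password/will)."""
    connect_flags = 0x02 if clean_session else 0x00
    variable_header = (
        struct.pack("!H", 4) + b"MQTT"     # protocol name
        + bytes([4])                       # protocol level 4 = MQTT 3.1.1
        + bytes([connect_flags])
        + struct.pack("!H", keepalive)     # keep-alive time, seconds
    )
    payload = struct.pack("!H", len(client_id)) + client_id.encode()
    remaining = len(variable_header) + len(payload)
    assert remaining < 128                 # sketch: single-byte remaining length
    return bytes([0x10, remaining]) + variable_header + payload

pkt = mqtt_connect_packet("sensor-001", keepalive=900)
# 24 bytes total for this minimal CONNECT.
```

In practice a client library such as Eclipse Paho performs this framing; the sketch only makes the packet anatomy visible.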

How does the subscription process for topics work?

1. The SUBSCRIBE command can have one or more topics, each with a maximum QoSLevel for server messages to the client. The command that the client sends to the broker contains:

  • Flags (QoSLevel, etc.)
  • Topic(s)

2. The broker sends SUBACK command to the client in acknowledgement.

3. The broker updates its subscriptions database, containing:

  • Client ID
  • QoSLevel for the requested subscription updates
  • Topic name (may include wildcards)

How does the client publish updates?

1. The client sends a PUBLISH command to the broker, containing:

  • Flags (QoSLevel, Retain, DUP, etc.)
  • Topic
  • Message (data value)

2. The broker sends a PUBLISH to every client subscribed to this topic.

3. If the retain flag is set, the broker updates its message database with the latest message for this topic. The next time a new client subscribes to this topic, it will receive the last retained message directly.
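As an illustration of the publish step, this sketch assembles a minimal MQTT 3.1.1 PUBLISH control packet at QoS 0 with the retain flag set; the topic and message are invented examples.

```python
import struct

def mqtt_publish_packet(topic, message, retain=False):
    """Assemble a minimal MQTT 3.1.1 PUBLISH packet at QoS 0 (no packet ID)."""
    fixed_byte = 0x30 | (0x01 if retain else 0x00)   # packet type 3 = PUBLISH
    variable_header = struct.pack("!H", len(topic)) + topic.encode()
    remaining = len(variable_header) + len(message)
    assert remaining < 128                           # sketch: single-byte length
    return bytes([fixed_byte, remaining]) + variable_header + message

# With the retain flag set, the broker stores this as the topic's last message.
pkt = mqtt_publish_packet("parking/lot4/slot17", b"occupied", retain=True)
```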

MQTT for Sensor Networks (MQTT-SN)

Message Queuing Telemetry Transport - Sensor Networks (MQTT-SN) is based on the MQTT standard and makes the protocol more lightweight for use with sensor and actuator solutions. It is aimed at embedded devices using Non-IP Data Delivery (NIDD), UDP/IP or ZigBee networks.

Gateways are used as a bridge to translate between MQTT and MQTT-SN commands. MQTT-SN gateway functionality can be integrated directly into the broker, or implemented as a separate entity. Clients can find these gateways automatically by using a DISCOVERY command; that is, clients do not need to be preconfigured with a gateway or broker address.

Multiple Gateways may be present at the same time within a single wireless network, for example, for load sharing or outage backup. Two possibilities exist for their implementation:

Transparent gateway:

  • A dedicated, end-to-end connection is established for each client
  • Easy to implement; however, the number of concurrent connections might be limited

Aggregating gateway:

  • Only one connection is established between the gateway and broker
  • More complex to implement

Figure: MQTT-SN Session Protocol & Overhead

Unlike in MQTT, Topic Names are replaced with 16-bit TopicID integers to shorten the amount of transmitted data. These TopicIDs can be preconfigured for individual clients in a known network to avoid sending REGISTER commands; this can be done either from client to broker, or vice-versa. Alternatively, Short Topic Names (2 characters) can be used.

As clients can be in sleep state to save power, gateways, in turn, buffer messages for sleeping devices.

Finally, QoS Level -1 allows simple clients to send messages directly to known gateways, without the otherwise necessary initial connection set-up, registration or subscription.
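A QoS -1 publish to a preconfigured TopicID can be sketched as a raw MQTT-SN PUBLISH message; the TopicID and payload below are invented examples.

```python
import struct

def mqttsn_publish(topic_id, msg_id, data, qos_minus_1=False):
    """Assemble an MQTT-SN PUBLISH message (short, 1-byte length form)."""
    flags = 0x60 if qos_minus_1 else 0x00   # QoS bits 0b11 signal QoS level -1
    body = bytes([0x0C, flags]) + struct.pack("!HH", topic_id, msg_id) + data
    return bytes([1 + len(body)]) + body    # leading length octet

# A QoS -1 publish to a preconfigured TopicID: only 7 bytes of protocol
# overhead, and no prior CONNECT, REGISTER or SUBSCRIBE exchange needed.
pkt = mqttsn_publish(topic_id=0x0042, msg_id=0, data=b"occupied", qos_minus_1=True)
```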

Due to these enhancements, several mobile network operators promote the usage of MQTT-SN for NarrowBand IoT networks.

Figure: Comparison of IoT Protocols - MQTT-SN

How is the client registration of topic done, to receive a TopicID?

1. The client sends the REGISTER command to the gateway, containing a Long Topic Name.

2. The gateway sends REGACK command back as acknowledgement, containing:

  • TopicID
  • ReturnCode: Accepted, or reject reason

3. The gateway updates internal database for this client, including:

  • Client ID
  • Long Topic Name
  • TopicID

How do clients discover a Gateway?

1. The gateway broadcasts the ADVERTISE command.

2. Clients search with the SEARCHGW command, broadcast with a radius measured in hops.

3. A gateway responds to the client with GWINFO command.

4. The client can then connect to the gateway.

How are clients assigned QoS Level -1?

(Preconditions: The gateway address is preconfigured in client)

The client sends a message with a predefined TopicID, and does not care whether the gateway is reachable or the message was received.

Compared to QoSLevel 0 with known gateway:

  • The client does not need to send Connect command
  • The gateway does not need to send an Acknowledge
  • The gateway has less Client information

How is the sleep procedure managed?

1. When the client connects to a gateway, the gateway saves the client state as “active.”

2. In the event that a client sends a Disconnect command with a sleep duration time (a Disconnect command without sleep duration is considered as a regular client disconnect), the gateway saves the client state as “asleep.”

3. When the client wakes up to send a message, the gateway saves the client state as “awake” (If no ping is sent after sleep duration, the state is lost).

4. The gateway thereafter sends all buffered pending messages to the client. The gateway sends a ping response and the client returns to the “asleep” state.

Constrained Application Protocol (CoAP)

The Constrained Application Protocol (CoAP), as defined in RFC 7252, is a specialized messaging protocol that is optimized for use on memory- and power-constrained devices, such as M2M and IoT devices built on 8-bit microcontrollers with small amounts of ROM and RAM. The integration of IoT services with existing web services is facilitated by CoAP, which interfaces server-side to HTTP via proxies. The protocol thus meets the needs of mass-deployed, resource-limited devices which must communicate with each other using a request/response interaction model across the LPWA network or Internet, with moderate QoS. This is different from MQTT, which uses a publish/subscribe model and offers higher QoS support. Apart from multicast support, CoAP offers smaller overhead, built-in discovery of services and resources, and Web features such as URIs and Internet media types. Architecturally, IoT devices run a CoAP server; the CoAP client is installed on the controller that manages multiple IoT devices.

Figure: CoAP Protocol Header

CoAP typically runs on devices supporting UDP/IP or Non-IP Data Delivery. Depending on the usage of optional features, it typically adds 20-30 Bytes of overhead on top of the application payload. The specification has optimized CoAP to ensure:

  • Reduction of overhead and parsing complexity
  • URI and content-type support
  • Discovery of service resources
  • Subscription for resources, and resulting push notifications
  • Simple caching based on max-age
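As an illustration of the small overhead, this sketch assembles a minimal confirmable CoAP GET request per RFC 7252; the message ID and URI path are example values.

```python
import struct

def coap_get(message_id, uri_path):
    """Assemble a minimal confirmable CoAP GET request (RFC 7252), empty token."""
    ver_type_tkl = (1 << 6) | (0 << 4) | 0   # version 1, CON type, token length 0
    GET = 0x01                               # method code 0.01
    header = struct.pack("!BBH", ver_type_tkl, GET, message_id)
    # One Uri-Path option (option number 11); sketch assumes a path < 13 bytes.
    path = uri_path.encode()
    assert len(path) < 13
    option = bytes([(11 << 4) | len(path)]) + path
    return header + option

msg = coap_get(0x1234, "sensors")
# 4-byte fixed header + 8 option bytes: a complete request in 12 bytes.
```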

Figure: Comparison of IoT Protocols - CoAP

Which messaging protocol should I use?

When implementing an IoT application for Mobile IoT deployments, it is necessary to carefully select the protocol that will be used for messaging and management of deployed devices. Among the various options that exist, the following two are based on TCP transport protocol on top of IP-based Internet protocol:

  • HTTP is a REST-based protocol for synchronous communication, with considerable complexity and overhead that can range up to 200 Bytes; this protocol is not suitable for LPWA communication
  • MQTT is a publish/subscribe protocol with high QoS, with overhead that can range up to 35 Bytes

Likewise, the following two protocols may be used, based on either Non-IP Data Delivery or on UDP transport protocol on top of IP-based Internet protocol:

  • CoAP is a REST-based, request/response protocol with moderate QoS, with overhead that can range between 20-30 Bytes
  • MQTT-SN is a publish/subscribe protocol with high QoS, whose overhead occupies around 10 Bytes

As seen in this simple comparison, the most efficient IoT protocols for NarrowBand IoT would be MQTT-SN or CoAP, preferably transmitted without UDP/IP encapsulation. This helps to reduce the power consumption at the IoT device to a minimum.

Figure: Messaging/Management Protocol Options

HyperText Transfer Protocol (HTTP)

The Hypertext Transfer Protocol (HTTP) is the REST-based application layer protocol most commonly used for the transfer of hypertext documents from servers on the World Wide Web to client computers, which can display the content in a web browser. Besides this application, HTTP is also commonly used in Machine-to-Machine (M2M) communication and for file transfers.

HTTP consists of a collection of request methods which provide and control the transfer of data between a server and a client in a synchronous way. The most prominent request methods are GET, which retrieves a hypertext document from a server, and PUT and POST, both of which upload data from a client to a server. HTTP was introduced in 1996 as HTTP/1.0 and has since seen a number of updates and extensions improving efficiency (HTTP/1.1 and HTTP/2.0) as well as security and privacy (HTTPS).

Because HTTP is a plain text protocol based on the TCP transport layer protocol, it creates a relatively large overhead compared to binary protocols and protocols based on UDP instead of TCP. This makes it less suitable for M2M or Internet of Things (IoT) type communication when low cost of devices and low energy consumption are prime concerns. The most prominent HTTP request methods are:

  • GET – Retrieves hypertext document from server
  • HEAD – Retrieves meta-information from server (without content)
  • PUT – Requests that enclosed entity is stored under a supplied URI
  • POST – Requests that enclosed entity is accepted as a new subordinate of the supplied, existing URI
  • DELETE – Requests deletion of the specified resource
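To illustrate the plain-text overhead, here is a minimal raw HTTP/1.1 GET request; the host and path are invented examples.

```python
# A minimal raw HTTP/1.1 GET request; host and path are invented examples.
request = (
    "GET /sensors/17 HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "Connection: keep-alive\r\n"
    "\r\n"
)
overhead = len(request.encode())
# Even this stripped-down request costs dozens of bytes before any payload;
# realistic requests with User-Agent, Accept etc. easily approach 200 Bytes.
```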

Figure: HTTP over TCP/IP Transport

As HTTP is transported over a TCP/IP bearer, there are three possible TCP transmission scenarios, as shown above:

  • (A) TCP session closed - The HTTP set-up will need to be renegotiated with the next session-opening (TCP handshake);
  • (B) TCP session left open without "Keep Alive" probe packets - HTTP set-up does not need to be renegotiated before the next application message;
  • (C) TCP session left open with "Keep Alive" probe packets - HTTP set-up does not need to be renegotiated before the next application message.

Figure: Comparison of Protocols for IoT - HTTP

Connection Efficiency No Harm to Network

Mobile network operators take service quality seriously. In order to secure the Service Level Agreements (SLAs) which we offer our customers, we must ensure that there is No Harm to Network (NHTN) from IoT devices. The risks are quite high, and the consequences are well known: many things can happen - from unrecoverable devices out in the field, to signaling storms threatening the radio and core networks, back-end business and operations infrastructure and cloud. There are numerous reasons why this is among the greatest of IoT challenges:

  • There are currently no standardized operating systems with a limited API surface for IoT devices. This means that many applications can ignore mobile operator network requests for the IoT device to back off. Network-side mechanisms cannot truly enforce how device applications communicate, or which connectivity bearer they request; 3GPP™ gives sufficient autonomy to IoT devices.
  • Multi-mode IoT devices may camp on a wrong Radio Access Technology (RAT) or may not proactively switch whenever the service quality or the use-case require it.
  • Thousands of suppliers may lack sufficient 3GPP™ knowledge to understand how to correctly handle network rejects, or to properly implement technology features of the RAT protocol.
  • The 3GPP™ specifications were furthermore written with consumer devices in mind, typically having a user in their proximity, as well as a user interface to reset said devices. There are specific network reject causes used regularly by mobile network operators worldwide with consumer devices, which may leave an IoT device permanently stranded.
  • Unoptimized, Mobile IoT-unfriendly IoT device applications may disregard the specific characteristics of the NarrowBand IoT or LTE-M access bearer they are using, congesting the network and worsening service quality for other paying customers. There are typically limits on how many messages the IoT application should originate per device each day, as well as on the total monthly payload transmitted.
  • IoT applications may be unable to define and tag the data priority of application messages and to schedule data transmissions accordingly.
  • Last, but not least, it may not be possible to reach all IoT devices with pushed firmware updates, because of the fragmentation of device management support and capabilities (e.g. Firmware over the Air) across the supplier landscape.

With all of these aspects considered, it becomes necessary to implement a governance model for doing business. Mobile network operators approach the problem from several angles. They actively engage in the definition of the GSMA TS.34 "IoT Device Connection Efficiency Guidelines," the industry's only specification for NHTN. Features have been defined in this forum for industry implementation, namely, the Radio Policy Manager and Network Friendly Mode. Furthermore, they actively encourage the specification's adoption and observance among suppliers and the customer base. Whenever deployed volumes of a specific IoT device exceed a tariff-defined number of SIM cards, both the customer's IoT service platform and the IoT device application may be required to be compliant with GSMA TS.34. The number of SIM card units is set at the level where network-impacting devices may become statistically relevant in pockets of the operator's network. Improper implementation of the IoT application may cause network-side damage either locally or generically.

Radio Policy Manager (RPM)

Many mobile network operators require the implementation and activation of the Radio Policy Manager (RPM), a GSMA TS.34 feature which, when implemented within the 3GPP™ chipset, helps protect the network from the signaling overload caused by improperly-designed IoT applications. To date, RPM is the sole industry solution deployed worldwide which optimizes how IoT devices communicate. This chipset feature limits the number of mobility management (MM) and session management (SM) signaling events allowed per IoT device during a configurable time-window, for example:

  • Maximum number of chipset resets per hour
  • Maximum number of PDP Activation Requests per hour
  • Maximum number of successful PDP (de-)activation Requests per hour
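A per-device cap of this kind can be sketched as a rolling-window counter; the limit and window values below are illustrative, not figures from the TS.34 specification.

```python
import time

class EventLimiter:
    """RPM-style cap sketch: allow at most `limit` events per rolling window.
    The limit and window values used below are illustrative, not TS.34 figures."""

    def __init__(self, limit, window_s=3600.0):
        self.limit = limit
        self.window_s = window_s
        self.events = []

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # Drop events that have aged out of the rolling window.
        self.events = [t for t in self.events if now - t < self.window_s]
        if len(self.events) >= self.limit:
            return False          # budget spent: the chipset should back off
        self.events.append(now)
        return True

pdp_requests = EventLimiter(limit=3)   # e.g. max 3 PDP Activation Requests/hour
results = [pdp_requests.allow(now=float(i)) for i in range(5)]
# → [True, True, True, False, False]
```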

As per several mobile network operators' No Harm to Network governance models, availability and activation of RPM is a requirement whenever the number of devices a customer wants to bring onto their networks exceeds a certain number of units. For this reason, there is close collaboration with radio chipset suppliers to integrate this feature into the chipset's protocol stack for NarrowBand IoT and LTE-M devices. RPM is defined in Chapter 8 of the TS.34 specification.

Key objectives can be summarized as such:

  • Protect the Network by performing “Connection Aggression Management” which is necessary when the device is aggressively trying to access the network following various NAS reject scenarios
  • Ensure the device can recover back into normal operating mode following a network failure/reject scenario

The following OEM modules already support the GSMA TS.34 Radio Policy Manager:

  • Advantech WISE-1570
  • Changhong (AI-Link) AI-NB25
  • Cheerzing ML5515
  • Cheerzing ML5535-G8
  • Fibocom N510-EAU-00
  • Fibocom N510-GL-20
  • Gemalto/Thales Cinterion® ELS81-E
  • Gemalto/Thales Cinterion® ENS22-E
  • Gemalto/Thales Cinterion® mPLS62-W
  • Gemalto/Thales Cinterion® mPLS8-E
  • Gemalto/Thales Cinterion® PLS62-W
  • Gemalto/Thales Cinterion® PLS8-E
  • Gosuncn ME3616
  • Lierda NB86-G
  • MobileTek L620
  • Murata Type 1RX
  • Murata Type 1SS
  • Notion MW29
  • Notion OT01-3
  • Notion OT01-5
  • Quectel BC66
  • Quectel BC66-NA
  • Quectel BC68
  • Quectel BC95-G
  • Quectel EC21-EC
  • Quectel EC21-EC Mini-PCIe
  • Quectel EC21-EU
  • Quectel EC21-EU Mini-PCIe
  • Quectel EC25-EC
  • Quectel EC25-EC Mini-PCIe
  • Quectel EC25-EU
  • Quectel EC25-EU Mini-PCIe
  • Quectel EG06-E
  • Quectel EG25-G
  • Quectel EG25-G Mini-PCIe
  • Quectel EM06-E
  • Quectel EP06-E
  • Ruijie RG-NB6120
  • Sercomm TPB23
  • SIMCom SIM7020E
  • SIMCom SIM7020G
  • SIMCom SIM7500E
  • SIMCom SIM7500E PCIe
  • SIMCom SIM7500E PCIe A
  • SIMCom SIM7600E
  • SIMCom SIM7600E PCIe
  • SIMCom SIM7600E PCIe A
  • SIMCom SIM7600E-H
  • SIMCom SIM7600E-H Mini-PCIe
  • Teltonika TNB260
  • u-blox MPCI L210
  • u-blox TOBY-L210
  • USR WH-NB71

Network Friendly Mode

Network Friendly Mode (NFM) is a non-standardized feature of the 3GPP™ communication module whose objectives include:

  • Optimization of the number of times the communication module is allowed to register with the mobile network
  • Limiting and extending the periods between attempts to perform an IMSI attach, GPRS attach, PDP context activation, or SMS origination. The aim is to reduce the number of message signaling units (MSUs) generated towards the HPLMN's HLR, SMSC, GGSN or PGW

Network Friendly Mode is composed of two features: a Start Timer and a Back-Off Timer. The latter is activated by specific back-off triggers and configured via a back-off iteration counter and back-off base intervals.
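
The interplay of an iteration counter and base interval can be pictured as a growing back-off schedule. The function below is a hypothetical sketch: the parameter names and the doubling multiplier are illustrative, and the deprecated TS.34 NFM feature defined its own triggers and counters.

```python
def nfm_backoff_intervals(base_interval_s, max_iterations, multiplier=2):
    """Hypothetical sketch of an NFM-style back-off schedule.

    Each back-off trigger increments an iteration counter, and the
    wait before the next attempt grows from the configured base
    interval. Illustrative only; not the exact TS.34 algorithm.
    """
    return [base_interval_s * multiplier ** i for i in range(max_iterations)]
```

For a 60-second base interval and four iterations, the device would wait 60, 120, 240 and 480 seconds between successive attempts, instead of hammering the network immediately after every failure.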

All functionalities of NFM were previously described in Chapter 7 of the GSMA TS.34 specification. In the latest releases of the specification, however, the NFM feature was completely deprecated. No additional industry support for NFM is currently planned, as most operators prefer the chipset-based Radio Policy Manager over the OEM module-based NFM feature. Furthermore, very few OEM module suppliers actually implemented this feature for the M2M / IoT business.

Mobile IoT Connection Efficiency

Mobile IoT technologies are highly optimized for maximum power efficiency. IoT applications that try to send data with very large payloads, low latency requirements, and/or a high frequency of messages are not suitable for either NarrowBand IoT or LTE-M. Furthermore, Mobile IoT is not designed for use in real-time and streaming services, such as those of Critical IoT. Service providers should not attempt to use this more economical bit pipe as an alternative for conventional 2G / 3G / LTE applications. Failing to adapt IoT application behavior to the NB-IoT and LTE-M paradigms may not only prematurely drain the battery of an IoT device, but may also congest or damage the mobile operator's network. A few devices transmitting large payloads every minute, in close proximity to each other, may even congest a cell, taking service away from other paying customers. Keeping this in mind, it is troublesome to consider the quality of service that one should expect on the networks of those service providers that do not create governance models to ensure No Harm to Network (NHTN), or that even enter into marketing-driven "megabyte offering" competitions.

To enforce communication efficiency, the IoT Solution Optimizer compares your proposed IoT application traffic (application payload plus protocol overhead) to the market-specific rules bound to your selected tariff market. These limits determine the maximum allowed monthly data volume and the maximum number of events per day on the selected tariff network, whereby one “event” is considered to be the IoT application traffic configured in the “Payload and Protocols” screen.
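
A simplified version of such a rule check might look as follows. The parameter names and limit values are illustrative only, not actual IoT Solution Optimizer tariff rules.

```python
def check_traffic_profile(bytes_per_event, events_per_day,
                          max_events_per_day, max_monthly_mb):
    """Compare a proposed traffic profile against hypothetical tariff
    limits, mirroring the check described above. One "event" is the
    complete configured traffic (payload plus protocol overhead)."""
    monthly_mb = bytes_per_event * events_per_day * 30 / 1e6
    problems = []
    if events_per_day > max_events_per_day:
        problems.append("too many events per day")
    if monthly_mb > max_monthly_mb:
        problems.append("monthly volume %.2f MB exceeds %s MB limit"
                        % (monthly_mb, max_monthly_mb))
    return problems  # empty list means the profile fits the tariff
```

A 512-byte report sent 24 times per day easily fits illustrative limits of 48 events/day and 5 MB/month, whereas a 10 kB report sent 100 times per day violates both.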

If your IoT application is required to transfer more frequently or larger monthly payloads than allowed on a specific Mobile IoT network, please consider a more efficient communication implementation. There are several options available:

  • The transport layer protocol used may be too heavy for Mobile IoT; please use more suitable technologies, such as UDP or Non-IP
  • Minimize the number of parallel mobile network connections by aggregating data into fewer application reports before they are compressed and sent over the mobile network
  • Data transcoding and compression techniques can be used, as per the IoT service’s intended Quality of Service, to reduce connection attempts and data volumes
  • Enable higher throughput or more frequent communication by using LTE-M instead of NB-IoT; the IoT Solution Optimizer will soon have the option to model your application using this radio access technology
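
The second and third options above can be sketched together: bundling several readings into one report and compressing it before transmission. This is a generic illustration using JSON and zlib; the field names are hypothetical, and a real deployment would choose the codec to suit its payloads and Quality of Service.

```python
import json
import zlib

def aggregate_and_compress(readings):
    """Bundle several sensor readings into one compressed report.

    Illustrative sketch of the aggregation and compression options
    above; fewer, larger reports also compress better than many
    tiny ones.
    """
    report = json.dumps(readings, separators=(",", ":")).encode()
    return zlib.compress(report, level=9)

# One compressed report can replace many small transmissions:
readings = [{"t": i, "temp_c": 21.5} for i in range(24)]
packed = aggregate_and_compress(readings)
```

Sending the single `packed` report once per day replaces 24 separate connection set-ups, reducing both signaling load and power consumption.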

Please keep in mind that trying to force a use case into a technology for which it was not designed may lead to costly operational and quality problems.

Factors affecting Power Consumption

LTE-M and NB-IoT are examples of Low Power Wide Area Network (LPWAN) technologies. Devices using these protocols to communicate are being deployed daily across the world to support a wide variety of IoT applications. With many devices being battery powered, it is crucial for application developers to configure their software and hardware in a way that maximizes the low power potential of these radio access technologies, delivering the best possible service experience and the longest battery life.

Optimizing for battery life on constrained devices can be challenging. An industry best practice is to use a device planning tool that can simulate device performance, accounting for the deployment model and environmental aspects, network feature support and coverage quality, as well as hardware and software (application) characteristics. The IoT Solution Optimizer is such a simulation platform which can help IoT enterprises save time and money, by helping them develop highly performant products more effectively.

In “Chapter 2: Project Analysis” of the IoT Solution Optimizer’s “Project Summary Report,” you will find two pie charts that help visualize where power consumption is occurring:

  • Composite Power Consumption – The contributions of communication activity, hardware power consumption, and a positioning service (if available) to the overall power consumption;
  • Communication Power Consumption – The contribution of different LTE-M and/or NB-IoT procedures to the overall power consumption caused by communication activity.

Each pie chart highlights to developers where optimizations can be made. It may be that the device is hardware-constrained, meaning that optimization efforts should focus on reducing the overall power consumption caused by components such as the microcontroller (MCU), sensors, or voltage regulation. On the other hand, devices spending too much power on location tracking should optimize the cadence of GNSS satellite tracking. Finally, communication procedures in the Connected Mode or Idle Mode can be optimized by using Mobile IoT power saving features. For more information on the individual pie chart components, please refer to the summary below.
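
Once the slices are combined into an average current draw, a back-of-the-envelope battery-life estimate follows directly. The numbers below are illustrative, not IoT Solution Optimizer output.

```python
def battery_life_days(battery_mah, avg_current_ua):
    """Estimate lifetime from battery capacity and average current.

    Illustrative only: it ignores self-discharge, temperature
    effects, and battery chemistry, all of which a full model must
    consider.
    """
    hours = battery_mah * 1000.0 / avg_current_ua  # mAh -> uAh, then / uA
    return hours / 24.0

# e.g. a 2400 mAh cell at a 50 uA average draw:
lifetime = battery_life_days(2400, 50)  # -> 2000.0 days, about 5.5 years
```

Halving the average current, for example by shifting Idle Mode time from eDRX into PSM, roughly doubles this estimate, which is why the communication slices of the pie charts matter so much.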

Figure: Composite Power Consumption (Example)

Source: IoT Solution Optimizer project; color may vary.

The "Composite Power Consumption" pie chart describes the following for each CE-Level:

  • Communication Activity: All power consumption caused by the IoT device application’s reception and transmission of messages (payload + protocol overhead) is captured in this slice. For more information on the individual contributions of protocol-specific Uplink and Downlink procedures, please view the second pie chart, “Communication Power Consumption,” available in the drop-down list within the IoT Solution Optimizer’s “Project Summary Report.” Please note that the quality of both the network's coverage and the antenna integration influences the amount of power needed to communicate reliably; the effects of these aspects are captured here.
  • Hardware Power: Power losses caused by hardware elements other than the modem (chipset/module) or GNSS are characterized in this slice. This includes the microcontroller (MCU) which runs the IoT device application and steers the modem, as well as sensors, actuators, battery leakage, voltage conversion (LDO, DC/DC converters), circuitry for heat dissipation, etc.
  • Positioning Solution: If a GNSS solution has been used in an IoT Solution Optimizer project, its consumed power is characterized here. Please note that this slice can be optimized by changing the frequency of satellite scans, the duration of tracking windows, and the periodicity of ephemeris and A-GPS metadata downloading.

Figure: Communication Power Consumption (Example)

Source: IoT Solution Optimizer project; colors may vary.

The "Communication Power Consumption" pie chart describes the following for each CE-Level:

  • Idle Mode (iDRX/eDRX): IoT Devices spend most of their lifetime in Idle Mode, during which their modems typically consume energy cyclically listening for incoming network messages within pre-defined timeslots. This discontinuous reception in LTE is a power saving feature called “Idle Mode Discontinuous Reception” (iDRX) and is automatically used by the chipset/module. Power consumption can be further reduced by using Enhanced DRX (eDRX), if available on both the network and modem, and activated by the IoT device application.
  • Power Saving Mode (PSM): This power saving feature, specified in 3GPP™ Release 12, is primarily used by Uplink-centric applications. It helps conserve battery power during the Idle Mode by disabling parts of the chipset protocol stack, thereby dropping power consumption for prolonged time periods into the micro-Ampere range. The reception of Downlink messages is not possible while the modem is in PSM deep sleep.
  • Tracking Area Updates - TAU Timer: In conformance with 3GPP™ Layer 3 procedures, the chipset protocol stack transmits TAU messages at regular intervals to keep the IoT Device's registration on the network active. This slice indicates the impact of such messages that are sent whenever the TAU timer (T3412) expires. As long as IoT Devices send TAU messages, their data context is kept and there is no need to reattach to the network to send or receive data.
  • Tracking Area Updates - Mobility Model: In conformance with 3GPP™ Layer 3 procedures, the chipset protocol stack transmits TAU messages at regular intervals to keep the network updated about the IoT device’s location. This slice indicates the impact of such messages that are sent whenever the IoT device crosses into a new network Tracking Area (TA), as a consequence of its mobility. As long as IoT Devices send TAU messages, the network knows where to locate and page them.
  • Uplink Application Reporting: This value summarizes the power consumption for sending Uplink data, i.e., the amount of energy required for the actual transmission of application payload and higher-layer protocol overhead. It does not contain the energy required for setting up the radio connection, or for the remaining time that the modem stays in Connected Mode before the inactivity timer expires.
  • Downlink Application Reporting: This value summarizes the power consumption for sending Downlink data, i.e., the amount of energy required for the actual transmission of application payload and higher-layer protocol overhead. It does not contain the energy required for setting up the radio connection, or for the remaining time that the modem stays in Connected Mode before the inactivity timer expires.
  • Connected Mode Operations: The energy consumed here covers all protocol operations related to setting up and maintaining an RRC bearer in Connected Mode, except for those activities related to IoT application reporting or Tracking Area Updates (TAU). This slice considers the Random Access procedure, the establishment of the RRC connection, the time spent in RRC Connected Mode during which data is neither sent nor received, and the release of said connection.
  • Boot & NW Attach: The powering ON of the chipset/module and its subsequent attachment to the 3GPP™ radio network is covered here. This includes the power consumption of scanning for suitable networks after boot.
  • Roaming Network Scan & Attach: This slice includes the additional energy required to scan for a suitable, allowed roaming network once the device leaves its home network. The figure includes the energy required to attach and register onto the target visited network.
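
On many modules, the PSM timers discussed above are requested by the IoT device application via the standard 3GPP™ TS 27.007 AT command AT+CPSMS, whose timer values are 8-bit strings encoded according to the GPRS Timer 3 (T3412 extended) and GPRS Timer 2 (T3324) formats of 3GPP™ TS 24.008. The sketch below illustrates that encoding; it assumes exactly representable values, a real implementation would round, and the network may in any case grant different values than requested.

```python
# Hypothetical encoder for the 8-bit timer strings used by AT+CPSMS.
# Unit tables follow the GPRS Timer 3 (T3412 ext) and GPRS Timer 2
# (T3324) encodings of 3GPP TS 24.008.
T3412_UNITS = [600, 3600, 36000, 2, 30, 60, 1152000]  # seconds per unit code 0..6
T3324_UNITS = [2, 60, 360]                            # seconds per unit code 0..2

def _encode(seconds, units):
    for unit, step in enumerate(units):
        if seconds % step == 0 and seconds // step <= 31:
            # 3 unit bits followed by 5 value bits
            return format(unit, "03b") + format(seconds // step, "05b")
    raise ValueError("value not exactly representable")

def encode_t3412(seconds):
    """Requested periodic TAU (T3412 extended) timer string."""
    return _encode(seconds, T3412_UNITS)

def encode_t3324(seconds):
    """Requested active time (T3324) timer string."""
    return _encode(seconds, T3324_UNITS)

# e.g. request a 4-hour periodic TAU and a 60-second active time:
cmd = 'AT+CPSMS=1,,,"%s","%s"' % (encode_t3412(4 * 3600), encode_t3324(60))
```

A longer periodic TAU keeps the device in PSM deep sleep for more of its lifetime, shrinking both the "Power Saving Mode" and "Tracking Area Updates - TAU Timer" slices at the cost of reachability.
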

Security Layer

Transport Layer Security (TLS)

Transport Layer Security (TLS) is a cryptographic security protocol for secure communication between a server (e.g. a cloud) and a client (e.g. a device). It is regarded as the successor to SSL (Secure Socket Layer), an older protocol for data encryption. TLS ensures a secure data exchange as it deploys mechanisms for encryption, authentication and integrity, which serve as a foundation for secure communication over the Internet.

  • Encryption – TLS uses symmetric cryptography to encrypt transmitted data. At the start of each handshake session, a unique key is generated for the connection between the client and server based on a negotiated shared secret. In this way, intruders are not able to decrypt the transferred data.
  • Authentication – the identity of the communicating peer is authenticated using asymmetric (public key) cryptography, in which each party verifies its peer’s certificate. This allows verification of the source of the data.
  • Integrity – an integrity check for each transmitted message ensures the reliability and integrity of the payload upon dispatch and receipt of the message. This prevents any undetected loss or alteration of the data during transmission.

TLS is designed to work over a reliable transport channel, such as the Transmission Control Protocol (TCP) on the Transport Layer. It is stream-oriented, which means that it requires data records to arrive in a particular sequence for correct decryption. TLS is often used for the encryption of MQTT messages.

Figure: Characteristics of TLS

Using TLS is not mandatory; however, when designing TCP-based IoT applications it is always recommended to use TLS for communication between a device and a remote server to avoid security risks such as eavesdropping, tampering, or message forgery. Currently, TLS Versions 1.0 - 1.2 are in use, whereas Version 1.3 brings further improvements for IoT applications. It is always recommended to use the latest version of TLS for more security. Please note that the TLS session itself has a timer and is periodically renegotiated as part of the protocol between the client and server, in order to ensure that the connection remains secure.
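
In practice, requesting a modern TLS version is a one-line configuration in most stacks. As a sketch using Python's standard ssl module on the client side (the broker hostname is a placeholder, and the surrounding MQTT library is out of scope here):

```python
import ssl

# Sketch: enforce a modern TLS version on the client side before
# wrapping an MQTT-over-TCP socket.
context = ssl.create_default_context()            # verifies server certificates
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse TLS 1.0 / 1.1
# tls_sock = context.wrap_socket(tcp_sock, server_hostname="broker.example.com")
```

The default context also enables hostname checking and certificate verification, so a misconfigured or spoofed server is rejected during the handshake rather than silently accepted.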

Figure: TLS Handshake Procedure

There is an article in the Technology Cards library to help you learn how TLS works.

When using TLS, it is of utmost importance to ensure that the underlying TCP session does not time out. In the figures below, an MQTT over TCP/IP message is sent between the client and server. In the first diagram, the period of communication is shorter than that of the mobile network operator (MNO) NAT firewall timer for TCP timeouts. In the second diagram, the period of communication is longer than the TCP timeout; however, a TCP keep-alive message is periodically sent to reset the NAT firewall timer. The third diagram makes it clear that any failure to maintain the session will ultimately require the client to repeat the TLS handshake procedure. Regular TLS handshakes may consume large amounts of data on constrained IoT devices, and ultimately shorten battery life.
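
Operating systems can send such keep-alives automatically once asked. The sketch below uses Python socket options; the TCP_KEEPIDLE/TCP_KEEPINTVL/TCP_KEEPCNT names are Linux-specific, and the interval values shown are placeholders that must be tuned below the operator's actual NAT timeout, which varies per network. Note also that on constrained devices the keep-alives themselves cost data and power.

```python
import socket

def enable_keepalive(sock, idle_s=300, interval_s=60, probes=5):
    """Ask the OS to send TCP keep-alives so the MNO's NAT firewall
    timer is reset between application messages. Values are
    placeholders; tune them below the operator's NAT timeout."""
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    if hasattr(socket, "TCP_KEEPIDLE"):  # Linux-only option names
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle_s)
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval_s)
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, probes)

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
enable_keepalive(sock)
```

Keeping the TCP session alive this way preserves the TLS session on top of it, avoiding the costly repeated handshakes shown in the third diagram.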

Figure: MQTT over TCP/IP with TLS, Regular App Messages

Figure: MQTT over TCP/IP with TLS, Regular Keep-alives

Figure: New Handshake upon Failure to Maintain TCP Session

Datagram Transport Layer Security (DTLS)

Datagram Transport Layer Security (DTLS) is a security protocol for secure data transfer which ensures communication privacy for IoT applications. It is based on the existing structure of the TLS protocol, adding features to support datagram-based communication. It provides security guarantees equal to TLS for encryption, authentication, and integrity, protecting against potential security risks (e.g. eavesdropping, tampering, or message forgery). DTLS is designed to work over an unreliable transport channel; typically, the User Datagram Protocol (UDP) is used on the Transport Layer, so packet loss must be handled in case of a connection break. The implemented mechanisms therefore allow the protocol to reorder packets or resume transfers for efficiency without compromising on security measures such as authentication or end-to-end encryption. CoAP messages are often encrypted using DTLS.

Figure: Characteristics of DTLS

Figure: DTLS Handshake Procedure

DTLS was created because the underlying UDP cannot guarantee the transmission of the TLS handshake without a loss of packets or their arrival in the incorrect order. Also, since there is no session handshake at the beginning of a UDP session, a sender of “Client Hello” messages could conduct IP spoofing or Denial-of-Service (DoS) attacks. This is prevented by using a stateless cookie during the establishment of each DTLS session.

Currently, DTLS Versions 1.0 and 1.2 are in use, whereas Version 1.3 brings further improvements for IoT applications. It is always recommended to use the latest version of DTLS to ensure more security.

There is an article in the Technology Cards library to help you learn how DTLS works.

Hypertext Transfer Protocol Secure (HTTPS)

Hypertext Transfer Protocol Secure (HTTPS) is a communication method using TLS (or formerly SSL) encryption over the standard application layer HTTP protocol to secure the data transfer over the Internet. HTTPS therefore utilizes the security mechanisms of TLS, such as encryption, authentication and integrity, to safeguard the data transfer. It was primarily introduced to add a layer of security over HTTP to protect against possible leakage of sensitive information (e.g. e-mails, login data, financial transactions over the Internet, etc.); however, it is also relevant when browsing the web, as all information on the content searched can remain private. As such, HTTPS is not a separate security protocol independent of TLS; as HTTP does not run over UDP, there is no equivalent of HTTPS for Datagram TLS (DTLS).

When using HTTPS, it is of utmost importance to ensure that the underlying TCP session does not time out. In the figures below, an HTTP over TCP/IP message is sent between the client and server. In the first diagram, the period of communication is shorter than that of the mobile network operator (MNO) NAT firewall timer for TCP timeouts. In the second diagram, the period of communication is longer than the TCP timeout; however, a TCP keep-alive message is periodically sent to reset the NAT firewall timer. Any failure to maintain the TCP session will ultimately require the client to repeat the TLS handshake procedure. Regular handshakes may consume large amounts of data on constrained IoT devices, and ultimately shorten battery life.

Figure: HTTPS over TCP/IP, Regular App Messages

Figure: HTTPS over TCP/IP, Regular Keep-alives

Which security protocol should I use?

Currently, the IoT Solution Optimizer does not model the impact of security cryptographic protocols for data encryption on the power performance of Mobile IoT devices. Such protocols ensure data integrity and privacy between IoT devices and their application servers over the Application Layer. In general, the following applies as guidance:

  • Selecting "None" is the most suitable option for many cases. Devices that use a private APN to communicate over the Mobile Network Operator's network are not reachable over the Internet by third parties, and are therefore sufficiently protected. Furthermore, if VPN tunnels are also used between operators, roaming traffic may be sufficiently protected. In the end, the type of data also plays a role; for example, the application payload may contain sensitive information that requires encryption.
  • Transport Layer Security (TLS) uses TCP transport, thereby requiring additional packets for acknowledgement messages. It employs long-term public and private keys to generate a short-term session key. The latter encrypts the data flow between the IoT device and server. X.509 certificates can be used to authenticate either side. Certificate authorities and public key certificates are therefore required to verify the relation between the certificate and its owner, as well as to generate, sign, and administer their validity.
  • Strictly speaking, Hypertext Transfer Protocol Secure (HTTPS) is not a separate security protocol, but refers to use of HTTP over an encrypted SSL connection. It is therefore commonly referred to as HTTP over TLS. Without TLS's encryption, HTTP is vulnerable to eavesdropping and man-in-the-middle attacks. All message contents, including the HTTP headers and the request/response data are encrypted in HTTPS.
  • Datagram Transport Layer Security (DTLS) is optimized for the transfer of datagrams, the basic transfer units of data used by connectionless communication services across a packet-switched network. With datagrams, the delivery, arrival time, and order of arrival need not be guaranteed by the network. This makes it possible to avoid excessive messaging to acknowledge receipt of individual packets at the destination. DTLS uses UDP transport and provides similar security guarantees as TLS. As it employs UDP, the application has to support packet reordering, loss of datagrams, and larger data sizes.

In the coming months, support for TLS, DTLS and HTTPS will be included in the IoT Solution Optimizer.

Figure: Transport Encryption Options

How does TLS work?

TLS consists of a handshake operation between the server and client. The server is normally an IoT application running on a server or IoT service on the Internet, whereas the IoT device usually plays the role of client.

The TLS handshake procedure can be divided into three stages, as shown below:

Stage 1: Client and Server Hello

  • Initially, the server has a Private Key.
  • The Client Hello message to the server specifies:
    • Highest TLS protocol version (e.g. 1.1, 1.2) supported by the client
    • Random Number
    • List of proposed cipher suites and compression methods
  • A Server Hello is sent to the client:
    • Chosen protocol version
    • Random Number
    • Selected cipher suite and compression method
    • Session ID as part of the message to perform handshake
  • If required by the selected cipher suite, the server's Certificate is sent to the client, who then checks its validity.
  • A Server Key Exchange is sent to the client if required by the selected cipher suite, including:
    • The server Public Key, which the client uses to encrypt data which the server can later decrypt with its Private Key
  • The Server Hello Done message completes the handshake negotiation.

Figure: Client and Server Hello

Stage 2: Set-up of Asymmetric Encryption

  • The client generates and encrypts a PreMasterSecret with the Public Key.
  • The Client Key Exchange message to the server contains the PreMasterSecret, which the server can decrypt with its Private Key.
  • Both client and server use their Random Numbers and PreMasterSecret to compute a common secret, the “Master Secret." It is now possible to use asymmetric encryption (from the client to server).
  • The Change Cipher Spec is sent to the server to inform that everything the client sends henceforth will be encrypted with the MasterSecret.
  • The client sends an encrypted Finished message to the server containing a hash and MAC over the previous handshake messages.

Figure: Set-up of Asymmetric Encryption

Stage 3: Set-up of Symmetric Encryption

  • The server attempts to decrypt the client's Finished message with the MasterSecret and verifies the hash and MAC.
  • The Change Cipher Spec is sent to the client to notify that everything the server sends henceforth will be encrypted with the MasterSecret.
  • The server concludes by sending its own encrypted Finished message containing a hash and MAC, which the client decrypts with the MasterSecret. At this point, the TLS handshake is complete.

Figure: Set-up of Symmetric Encryption

Upon the conclusion of Stage 3, the application protocol is enabled, and all application messages exchanged henceforth between the client and server will also be encrypted exactly like their Finished messages.

How does DTLS work?

DTLS consists of a handshake operation between the server and client. The server is normally an IoT application running on a server or IoT service on the Internet, whereas the IoT device usually plays the role of client.

The DTLS handshake procedure can be divided into three stages, as shown below:

Stage 1: Client and Server Hello

  • Initially, the server has a Private Key.
  • A Client Hello is sent to the server.
  • A Hello Verify Request is sent back to the client containing a stateless cookie, preventing Denial-of-Service (DoS) attacks.
  • The Client Hello is sent again to the server with the anti-DOS cookie value.
  • The TLS Server Hello handshake procedure is adopted by DTLS.
  • If required by the selected cipher suite, a Certificate may be sent to the client, who then checks its validity.
  • A Server Key Exchange is sent to the client if required by the selected cipher suite, including:
    • The server Public Key, which the client uses to encrypt data which the server can later decrypt with its Private Key
  • The Server Hello Done message completes the handshake negotiation.

Figure: Client and Server Hello

Please note: Messages in red will not be sent if pre-shared key cipher suites are used.

Stage 2: Set-up of Asymmetric Encryption

  • A Certificate may be sent to the server if required by the selected cipher suite, who then checks its validity. This mechanism is in place due to the underlying UDP, which is not as reliable as TLS's TCP transport.
  • The client then generates and encrypts a PreMasterSecret with the Public Key.
  • The Client Key Exchange message to the server contains the PreMasterSecret, which the server can decrypt with its Private Key.
  • Both client and server use their Random Numbers and PreMasterSecret to compute a common secret, the “Master Secret." It is now possible to use asymmetric encryption (from the client to server).
  • If the client had sent a Certificate, a Certificate Verify is sent to the server.

Figure: Set-up of Asymmetric Encryption

Please note: Messages in red will not be sent if pre-shared key cipher suites are used.

Stage 3: Set-up of Symmetric Encryption

  • The Change Cipher Spec is sent to the server to inform that everything the client sends henceforth will be encrypted with the MasterSecret.
  • The client sends an encrypted Finished message to the server containing a hash and MAC over the previous handshake messages.
  • The server attempts to decrypt the client's Finished message with the MasterSecret and verifies the hash and MAC.
  • The Change Cipher Spec is sent to the client to notify that everything the server sends henceforth will be encrypted with the MasterSecret.
  • The server concludes by sending its own encrypted Finished message containing a hash and MAC, which the client decrypts with the MasterSecret. At this point, the DTLS handshake is complete.

Figure: Set-up of Symmetric Encryption

Upon the conclusion of Stage 3, the application protocol is enabled, and all application messages exchanged henceforth between the client and server will also be encrypted exactly like their Finished messages.

oneM2M IoT Standard

oneM2M Service Layer

The horizontal architecture standardized by oneM2M defines an IoT Service Layer, a "middleware" sitting between processing and communication hardware on one side and IoT applications on the other, providing a rich set of functions needed by many IoT applications. It solves many common problems found in the world of M2M and brings numerous benefits. oneM2M supports, among others:

  • Secure end-to-end data/control exchange between IoT devices and custom applications by providing functions for proper identification
  • Authentication, authorization, encryption
  • Remote provisioning & activation
  • Connectivity setup
  • Buffering
  • Scheduling
  • Synchronization
  • Aggregation
  • Group communication
  • Device management

oneM2M’s Service Layer is typically implemented as a software layer and sits between IoT applications and processing or communication hardware and operating system elements that provide data storage, processing and transport, normally riding on top of IP. However, Non-IP transports are also supported via interworking proxies. The oneM2M Service Layer provides commonly needed functions for IoT applications across different industry segments.

Figure: IoT Service Layer

The IoT Service Layer is like an Operating System (OS) for the Internet of Things, sitting on field devices/sensors, gateways and in servers, and providing for Common Service Functions:

  • Applications control the Connectivity Layer and built-in sensors via APIs provided by the Operating System. As a result, applications become portable.
  • The Operating System, in turn, collects data transfer requests from applications. The OS optimizes and controls use of the network by the device and provides security.
  • Finally, the Connectivity Layer provides access to the Internet via the wired and wireless networks.

For example, traditional “vertical” domains are isolated silos, which makes it difficult to exchange data between them. Using a “horizontal” architecture allows seamless interaction between applications and devices, even those of different manufacturers. In the use case below, when a security application detects that no personnel are in the building, it switches off the lights and stops the air conditioning unit.

Figure: Interconnected M2M Silos make a true IoT

Without oneM2M, the market remains highly fragmented with limited vendor-specific applications. As the same services are developed again and again, the industry focuses on reinventing the wheel instead of on service innovations. Each M2M silo contains its own technologies without interoperability.

Figure: IoT Protocol Options

With oneM2M, an end-to-end platform is implemented that offers a common service capabilities layer to an ecosystem of compatible M2M silos. Interoperability at the level of data and control exchanges is achieved via uniform APIs. This provides for seamless interaction between heterogeneous applications and devices. And, most importantly, oneM2M is a global standard developed by ETSI and its partner organizations – not a proprietary framework controlled by a single private company.

Source: http://www.onem2m.org/getting-started/onem2m-overview/introduction/service-layer

oneM2M Horizontal Architecture

oneM2M defines a horizontal architecture providing common services functions that enable applications in multiple domains, using a common framework and uniform APIs.

Using these standardized APIs makes it much simpler for M2M/IoT solution providers to cope with complex and heterogeneous connectivity choices by abstracting out the details of using underlying network technologies, underlying transport protocols and data serialization. This is all handled by the oneM2M Service Layer without a need for the programmer to become an expert in each of these layers. Therefore, the application developer can focus on the process/business logic of the use case to be implemented and does not need to worry about how exactly the underlying layers work. This is very much like writing a file to a file system without worrying how hard disks and their interfaces actually work.

The IoT Service Layer specified in oneM2M can be understood as a distributed operating system for IoT providing uniform APIs to IoT applications in a similar way as a mobile OS does for the smartphone ecosystem. oneM2M decouples device, cloud, and application using open interfaces. This ensures that cloud- and network-specific M2M services can transcend into an open IoT framework which is cloud and network agnostic.

Figure: oneM2M is Cloud Provider-independent

Source: http://www.onem2m.org/getting-started/onem2m-overview/introduction/service-layer

Application Entity (AE)

The oneM2M Layered Model comprises three layers:

  • Application Layer
  • Common Services Layer
  • underlying Network Services Layer

The Application Entity is an element in the Application Layer that implements an M2M application service logic. Each application service logic can be resident in a number of M2M nodes and/or more than once on a single M2M node. Each execution instance of an application service logic is termed an "Application Entity" (AE) and is identified with a unique AE-ID. AEs are typically part of oneM2M Nodes. Examples of the AEs include an instance of a fleet tracking application, a remote blood sugar measuring application, a power metering application, or a pump controlling application.

Figure: oneM2M Layered Model

Source: http://www.onem2m.org/getting-started/onem2m-overview/introduction/functional-architecture

Common Service Entity (CSE)

The oneM2M Layered Model comprises three layers:

  • Application Layer
  • Common Services Layer
  • underlying Network Services Layer

A Common Service Entity represents an instantiation of a set of "common service functions" of the oneM2M Service Layer. A CSE is actually the entity that contains the collection of oneM2M-specified common service functions that AEs are able to use. Such service functions are exposed to other entities through the Mca (exposure to AEs) and Mcc (exposure to other CSEs) reference points. Reference point Mcn is used for accessing services provided by the underlying Network Service Entities such as waking up a sleeping device. Each CSE is identified with a unique CSE-ID. CSEs are typically part of oneM2M Nodes. Examples of service functions offered by the CSE include: data storage & sharing with access control and authorization, event detection and notification, group communication, scheduling of data exchanges, device management, and location services.

Figure: oneM2M Layered Model

Source: http://www.onem2m.org/getting-started/onem2m-overview/introduction/functional-architecture

Network Services Entity (NSE)

The oneM2M Layered Model comprises three layers:

  • Application Layer
  • Common Services Layer
  • underlying Network Services Layer

A Network Services Entity provides services from the underlying network to the CSEs. Examples of such services include location services, device triggering, and certain sleep modes like PSM in 3GPP™-based networks or long sleep cycles.

Figure: oneM2M Layered Model

Source: http://www.onem2m.org/getting-started/onem2m-overview/introduction/functional-architecture

Common IoT Problems Solved by oneM2M

oneM2M solves numerous problems present in M2M, enabling a true IoT to emerge:

  • Application Area - oneM2M provides globally standardized interfaces for the application developers (device and cloud). It also enables application portability across devices and platforms.
  • Data Interoperability - oneM2M provides services towards the application (registration and discovery, subscriptions and notification services, secure communication, device management, etc.). It enables device portability, meaning that a device can be connected to any infrastructure solution.
  • Connectivity - oneM2M stores data in case of lack of connectivity. It can control the device's usage of connectivity (when and how often communication happens), serving additionally as a network protection mechanism.

Source: http://www.onem2m.org/getting-started/onem2m-overview/introduction/service-layer

oneM2M Reference Points

The oneM2M functional architecture defines the following reference points:

  • Mca: Reference point for the communication flows between an Application Entity (AE) and a Common Services Entity (CSE). These flows enable the AE to use the services supported by the CSE, and for the CSE to communicate with the AE. The AE and the CSE may or may not be co-located within the same physical entity.
  • Mcc: Reference point for the communication flows between two Common Services Entities (CSEs). These flows enable a CSE to use the services supported by another CSE.
  • Mcn: Reference point for the communication flows between a Common Services Entity (CSE) and the Network Services Entity (NSE). These flows enable a CSE to use the supported services provided by the NSE. While the oneM2M Service Layer is, usually independent of the underlying network – as long as it supports IP transport – it leverages specific M2M/IoT optimization such as 3GPP’s eMTC features (e.g. device triggering, power saving mode, long sleep cycles, etc).
  • Mcc’: Reference point for the communication flows between two Common Services Entities (CSEs) in Infrastructure Nodes (IN) that are oneM2M compliant and that reside in different M2M Service Provider domains.

Additional reference points are defined in oneM2M for specific purposes such as enrolment functions, etc. These are not detailed in this overview.

Figure: oneM2M Reference Points

Source: http://www.onem2m.org/getting-started/onem2m-overview/introduction/functional-architecture

oneM2M Nodes

oneM2M has defined a set of "Nodes" that are logical entities identifiable in the oneM2M System. oneM2M Nodes typically contain CSEs and/or AEs. For the definition of Node types, oneM2M distinguishes between Nodes in the “Field Domain” – i.e. the domain in which sensors / actors / aggregators / gateways are deployed – and the “Infrastructure Domain” – i.e. the domain in which servers and applications on larger computers reside.

Nodes can be of the following types:

  • Application Dedicated Node (ADN): a Node that contains at least one AE and does not contain a CSE. It is located in the Field Domain. An ADN would typically be implemented on a resource-constrained device that may not have access to rich storage or processing resources and may therefore be limited to hosting only a oneM2M AE and not a CSE. Examples of devices that would be represented by ADNs: simple sensor or actor devices.
  • Application Service Node (ASN): a Node that contains one CSE and at least one Application Entity (AE), located in the Field Domain. An ASN could be implemented on a range of different devices, from resource-constrained devices up to much richer hardware. Examples of devices that would be represented by ASNs: data collection devices, more capable sensors and actors including simple server functions.
  • Middle Node (MN): a Node that contains one CSE and could contain AEs. MNs are located in the Field Domain. There could be several MNs in the Field Domain of the oneM2M System. Typically an MN would reside in an M2M Gateway. MNs would be used to establish a logical tree structure of oneM2M nodes, e.g. to hierarchically aggregate data of buildings / neighborhoods / cities / counties / states etc.
  • Infrastructure Node (IN): a Node that contains one CSE and could contain AEs. There is exactly one IN in the Infrastructure Domain per oneM2M Service Provider. As an example of physical mapping, an IN could reside in an M2M Service Enablement Infrastructure.
  • Non-oneM2M Node (NoDN): This Node type is not shown in the figure above. oneM2M specifications also define a Node Type for non-oneM2M Nodes which are Nodes that do not contain oneM2M Entities (neither AEs nor CSEs). Typically such Nodes would host some non-oneM2M IoT implementations or legacy technology which can be connected to the oneM2M system via interworking proxies.

Figure: oneM2M Node Topology

Source: http://www.onem2m.org/getting-started/onem2m-overview/introduction/functional-architecture

Common Service Functions

As a horizontal architecture providing a common framework for IoT, oneM2M went through a large number of IoT use cases and identified a set of common requirements, which resulted in the design of a set of tools termed Common Service Functions. Think of these functions as a large toolbox with special tools to solve a number of IoT problems across many different domains. Very much like a screwdriver can be used to fasten screws in a car as well as in a plane, the oneM2M Common Service Functions (CSFs) are applicable to different IoT use cases. Furthermore, oneM2M has standardized how these functions are executed, i.e. it has defined uniform APIs to access these functions.

Figure: Common Service Functions

The services above reside within a CSE and are referred to as CSFs. They provide services to the AEs via the Mca reference point and to other CSEs via the Mcc reference point.

None of these services is specific to any particular IoT domain. This enables each domain to build on top of this service layer and really focus on its specific industrial needs. This is similar to the functions of a generic operating system (OS) exposed to applications running on that OS. For instance, many applications read and write to files. File I/O is typically provided by the OS. oneM2M’s Service Layer provides similar functions in a generic way to many different IoT applications.

Source: http://www.onem2m.org/getting-started/onem2m-overview/introduction/common-service-functions

Benefits of using oneM2M

There are numerous benefits in using oneM2M in your IoT deployment:

Easy interworking and integration with existing and evolving deployments paves the way to long term evolution and sustainable economy.

  • Does not disrupt existing “vertical deployment”, but evolves it. oneM2M supports interworking with legacy technology.
  • Interworking with a rich set of proximal IoT technologies, embracing different ecosystems.
  • Takes advantage of the operators’ network capabilities and existing management technologies.

A Service Layer on top of the transport network supporting a choice of transport protocols and serializations of data/message.

  • Flexibility: oneM2M can be deployed on all domains, and is not tied to a particular protocol technology.
  • IP based: It relies on known existing APIs to handle IP communications.
  • Aware of optimizations if underlying network is 3GPP-based: oneM2M can make use of policy-based scheduling, power saving mode, triggering /wakeup of devices, non-IP data transport, etc., without the need for the developer to be aware of these terms.
  • Enhances data sharing efficiency: Communications over an underlying network are policed by provisioned policies that govern the use of network resources based on configurable categories of events/messages. oneM2M avoids storms of low-value messages in networks with costly resources, lowering Opex. For example, in use cases with a need for fast & compact message exchanges one may want to rely on TCP sockets (opened via WebSockets) and use binary serialization (e.g. CBOR), whereas in other cases a combination of HTTPS/JSON may be preferable for simpler debugging.
  • Evolution: oneM2M-supported transport protocols and/or message serialization can evolve while the oneM2M code will not change. This allows for easy adaptation to future transport technologies.

Horizontal platform provides common service functions that enable multiple IoT domains.

  • One investment/deployment serves multiple domains and does not re-invent the wheel. Lowers Capex.
  • No need to maintain domain-specific platforms, further reducing Capex.
  • Cross-domain service/application innovation with a common framework and uniform APIs allows for sharing of information and processes across domains that were isolated thus far (e.g. a home security system versus a heating system). It supports new business opportunities.
  • Re-use of code regardless of domain, increasing reusability and lowering Capex.

Data sharing and semantic interoperability brings the real value.

  • Data-oriented RESTful API design.
  • Semantic data annotation, discovery and reasoning facilitate intelligent analytics and service mashups.
  • Security protection at both channel and object level, with static and dynamic access control.

Open standards to avoid lock-in to a platform or a cloud provider.

  • No single party or company controls the technology / features.

Several open source implementations available (for CSE or AE).

oneM2M is the international standard for IoT.

  • Developed using a standardization methodology that has ensured successful interoperability in many technical domains, using the same process as in 3GPP™.
  • Developed by many companies based on consensus. It does not depend on a single or a small number of companies, and is not using a closed proprietary technology.
  • It is an open standard with transparent development process and open access to all deliverables. All the specifications, even drafts are available at: http://www.onem2m.org/technical/latest-drafts

Source: http://www.onem2m.org/getting-started/onem2m-overview/introduction/benefits-of-using-onem2m

Representational State Transfer (REST)

Representational State Transfer is a software architectural style that defines a set of constraints to be used for creating web services.

RESTful services allow the requesting systems to access and manipulate textual representations of resources by using a uniform and predefined set of stateless operations. A stateless protocol operation does not require the server to retain session information or status about each communicating partner for the duration of multiple requests.

REST is not a protocol. It is about manipulating resources, uniquely identified by URIs. A resource is stateful and can contain links pointing to other resources. All actions on resources are performed through a uniform interface. As REST is an architectural style, it can be mapped to multiple protocols such as HTTP, CoAP, etc.
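The mapping of an architectural style onto concrete protocols can be sketched as a simple lookup table. This is an illustrative example only: the table below pairs abstract REST operations with HTTP verbs and CoAP method codes (per RFC 7252), not any oneM2M-specific API.

```python
# Illustrative only: how abstract REST operations map onto two protocol
# bindings. HTTP verbs are standard; CoAP method codes follow RFC 7252.
REST_BINDINGS = {
    "create":   {"http": "POST",   "coap": "0.02 (POST)"},
    "retrieve": {"http": "GET",    "coap": "0.01 (GET)"},
    "update":   {"http": "PUT",    "coap": "0.03 (PUT)"},
    "delete":   {"http": "DELETE", "coap": "0.04 (DELETE)"},
}

def bind(operation: str, protocol: str) -> str:
    """Return the concrete method used for an abstract REST operation."""
    return REST_BINDINGS[operation][protocol]
```

Because the operations are uniform, an application written against the abstract interface does not change when the underlying protocol does.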

There are six guiding constraints that define a RESTful system. These constraints restrict the ways in which the server can process and respond to client requests:

  • Client-server: separation of concerns is the principle behind the client-server constraint.
  • Stateless server: each request from client to server contains all of the information necessary to understand the request, and cannot take advantage of any stored context on the server.
  • Cache: the client can reuse response data, sent by the server, by storing it in a local cache.
  • Layered system: allows an architecture to be composed of hierarchical layers. It enables the addition of features like a gateway, a load balancer, or a firewall to accommodate system scaling.
  • Code-on-demand (optional): REST allows client functionality to be extended by downloading and executing code in the form of scripts (e.g. JavaScript).
  • Uniform interface: resources are identified by resource identifiers, enabling the identification of the particular resource involved in an interaction between components. Resources are manipulated through representations, i.e. the state of a resource transferred between components. Messages are self-descriptive, containing metadata that describes their meaning. Hypermedia is used as the engine of application state (HATEOAS): clients find their way through the API by following links available in the resource representations.

oneM2M REST APIs are used to manipulate data generated by an Application Entity (AE) on the oneM2M Service platform (CSE), as well as to retrieve data from it.

Source: http://www.onem2m.org/getting-started/onem2m-overview/rest-architecture

oneM2M APIs

oneM2M REST APIs are used by CSEs and AEs to communicate with one another. The communication can be originated from an AE or CSE depending on the operation. Communication is done via the exchange of oneM2M primitives across the oneM2M defined reference points (Mca/Mcc/Mcc’).

The APIs are developed for handling CRUD+N (Create, Retrieve, Update, Delete and Notification) operations for oneM2M resources specified in oneM2M standards. Each oneM2M API includes the following components:

  • Primitives
  • Resources + Attributes
  • Data Types
  • Protocol Bindings
  • Procedures (CRUD+N)

Primitives are used to perform CRUD+N operations on resources hosted by CSEs or to send notifications to AEs. Each CRUD+N operation is comprised of a pair of Request and Response primitives. Access and manipulation of the resources is subject to access control privileges.
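As a sketch of the request/response pairing described above, the snippet below builds a CREATE request primitive as a plain dictionary using the short parameter names found in oneM2M serializations (op = operation, to = target, fr = originator, rqi = request identifier, ty = resource type, pc = primitive content). The target address and originator ID are illustrative assumptions.

```python
# Sketch, not a oneM2M implementation: a CREATE request primitive as a dict.
# In oneM2M's operation enumeration, 1 = CREATE, 2 = RETRIEVE, 3 = UPDATE,
# 4 = DELETE, 5 = NOTIFY; resource type 3 = <container>.
import uuid

def create_request_primitive(target: str, originator: str,
                             resource_type: int, content: dict) -> dict:
    return {
        "op": 1,                   # operation: CREATE
        "to": target,              # address of the resource's parent
        "fr": originator,          # AE-ID or CSE-ID of the Originator
        "rqi": str(uuid.uuid4()),  # unique request identifier
        "ty": resource_type,       # type of resource to create
        "pc": content,             # primitive content (resource representation)
    }

req = create_request_primitive(
    target="/in-cse/in-name",                       # hypothetical CSE address
    originator="CmyApplication",                    # hypothetical AE-ID
    resource_type=3,                                # <container>
    content={"m2m:cnt": {"rn": "temperatureData"}},
)
```

The Receiver would answer with a response primitive echoing the same rqi, which is how a CRUD+N operation's request and response are matched.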

Source: http://www.onem2m.org/getting-started/onem2m-overview/application-program-interfaces-api

oneM2M Primitives

Primitives are service layer messages transmitted over the Mca/Mcc/Mcc’ reference points. Originators send requests to Receivers via primitives. Originator and Receiver can be an AE or a CSE. Each CRUD+N operation consists of one request and one response primitive.

Figure: General Primitives Flow

Primitives are bound to underlying transport layer protocols such as HTTP, CoAP, MQTT or WebSocket. Primitives are generic with respect to underlying network transport protocols. Each primitive is also bound to zero or more messages in the Transport Layer.

Figure: oneM2M Communications

A primitive consists of two parts, control and content:

  • The control part contains parameters required for the processing of the primitive itself (e.g. request or response parameters). Primitives are encoded and serialized based on the particular oneM2M protocol binding being used. The originator and receiver of each primitive use the same binding, and thus use compatible forms of encoding/decoding and serialization/de-serialization. During transfer, the control part is encoded based on the protocol binding being used, and the content portion is serialized using XML, JSON or CBOR.
  • The content part is optional, depending on the type of primitive; it contains the serialized representation of the resource (using XML, JSON or CBOR), consisting of all or a subset of the resource attributes.
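To make the control/content split concrete, here is a minimal sketch of how a CREATE primitive could be bound to an HTTP request, in the spirit of the oneM2M HTTP binding (where the originator and request identifier travel in the X-M2M-Origin and X-M2M-RI headers and the resource type in the Content-Type). The CSE URL, originator ID, and request ID are illustrative assumptions.

```python
# Sketch of an HTTP binding for a CREATE primitive. The control part maps to
# the HTTP method, URL, and X-M2M-* headers; the content part maps to the body.
# URL and identifiers below are hypothetical.
def bind_create_to_http(cse_url: str, originator: str,
                        request_id: str, container_name: str) -> dict:
    return {
        "method": "POST",                             # op: CREATE
        "url": cse_url,                               # to: target resource
        "headers": {
            "X-M2M-Origin": originator,               # fr: Originator ID
            "X-M2M-RI": request_id,                   # rqi: request identifier
            "Content-Type": "application/json;ty=3",  # ty=3: <container>
        },
        "body": {"m2m:cnt": {"rn": container_name}},  # pc: content part (JSON)
    }

msg = bind_create_to_http("http://cse.example.com/in-cse",
                          "CmyApp", "req-0001", "temperatureData")
```

The same primitive could equally be bound to CoAP or MQTT; only the control-part encoding changes, not the content part.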

Figure: oneM2M Primitive

Figure: Example of Control Part bound to HTTP

Figure: Example of <container> Resource Representation (JSON)

Figure: Example of <container> Resource Representation (XML)

Source: http://www.onem2m.org/getting-started/onem2m-overview/application-program-interfaces-api/onem2m-primitives

oneM2M Resources

All entities in the oneM2M system, such as AEs, CSEs, application data representing sensors, commands, etc., are represented as resources in the CSE. Each resource has its own specific type with a defined set of mandatory and optional attributes, as well as child resources. Each resource is addressable and can be the target of CRUD operations specified in oneM2M primitives.

Figure: oneM2M Resource Template

The root of the oneM2M resource structure is <CSEBase>, which is assigned an absolute address. All other child resources are addressed relative to <CSEBase>. Depending on the type of child resource, it can be instantiated 0...n times.

Figure: <CSEBase> Resource Structure

Each resource contains attributes that store information pertaining to the resource itself. The attributes are:

  • Universal attributes, appearing in all resources
  • Common attributes, appearing in more than one resource, and having the same meaning whenever they do appear
  • Resource-specific attributes

Figure: oneM2M Attributes

oneM2M defines XML, JSON, and CBOR schemas which define the attributes of each resource type. These schemas bind oneM2M attributes to well-known data types defined by XML schema definitions (e.g. xs:string, xs:anyURI, etc.). Schemas also bind oneM2M attributes to oneM2M-defined data types (e.g. m2m:id, m2m:stringList, etc.).

Figure: Example oneM2M Schema

Source: http://www.onem2m.org/getting-started/onem2m-overview/application-program-interfaces-api/onem2m-resources

oneM2M Procedures

Overview of oneM2M Procedures

The oneM2M standard defines several procedures and operations:

  • Access Resources in Local CSE
  • Access Resources in Remote CSE
  • CREATE operation
  • UPDATE operation
  • RETRIEVE operation
  • NOTIFY operation
  • DELETE operation (optional)

With the IoT Solution Optimizer, you can model each of the operations listed above while building your "virtual twin" IoT device.

Access Resources (Local CSE)

Accessing resources on a local CSE is one of the supported oneM2M procedures.

Figure: Access Resources in Local CSE

Source: http://www.onem2m.org/getting-started/onem2m-overview/application-program-interfaces-api/onem2m-procedures

Access Resources (Remote CSE)

Accessing resources on a remote CSE is one of the supported oneM2M procedures.

Figure: Access Resources in Remote CSE

Source: http://www.onem2m.org/getting-started/onem2m-overview/application-program-interfaces-api/onem2m-procedures

CREATE Operation

CREATE is one of the defined oneM2M operations. The following document details the contents of CREATE Requests and CREATE Responses: oneM2M CREATE Operation.

Figure: CREATE Operation

Source: http://www.onem2m.org/getting-started/onem2m-overview/application-program-interfaces-api/onem2m-procedures

RETRIEVE Operation

RETRIEVE is one of the defined oneM2M operations. The following document details the contents of RETRIEVE Requests and RETRIEVE Responses: oneM2M RETRIEVE Operation.

Figure: RETRIEVE Operation

Source: http://www.onem2m.org/getting-started/onem2m-overview/application-program-interfaces-api/onem2m-procedures

UPDATE Operation

UPDATE is one of the defined oneM2M operations. The following document details the contents of UPDATE Requests and UPDATE Responses: oneM2M UPDATE Operation.

Figure: UPDATE Operation

Source: http://www.onem2m.org/getting-started/onem2m-overview/application-program-interfaces-api/onem2m-procedures

NOTIFY Operation

NOTIFY is one of the defined oneM2M operations. The following document details the contents of NOTIFY Requests and NOTIFY Responses: oneM2M NOTIFY Operation.

Figure: NOTIFY Operation

Source: http://www.onem2m.org/getting-started/onem2m-overview/application-program-interfaces-api/onem2m-procedures

DELETE Operation

DELETE is one of the defined oneM2M operations. The following document details the contents of DELETE Requests and DELETE Responses: oneM2M DELETE Operation.

Figure: DELETE Operation

Source: http://www.onem2m.org/getting-started/onem2m-overview/application-program-interfaces-api/onem2m-procedures

Data Collection

Data Collection Principles (Container)

The container for data instances is represented by the <container> resource. It is a data store used to share information with other entities and to track data. Two possibilities exist:

  • The <container> resource has no associated content of its own; only attributes and child resources are available. The actual data/content is stored in <contentInstance> child resources.
  • <container> is the only resource allowed to have recursive child resources: a <container> resource can have other <container> resources as children. This is useful for representing hierarchical data structures.
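The recursive nesting described above can be sketched as a small helper that builds a hierarchical container tree. The resource names and the "ch" key used to hold child containers are illustrative, not oneM2M attribute names.

```python
# Sketch of the recursive <container> structure: containers may nest other
# containers, giving a hierarchical data tree. Resource names and the "ch"
# key for children are illustrative.
def make_container(name, children=None):
    """Build a nested <container> representation."""
    return {"m2m:cnt": {"rn": name, "ch": list(children or [])}}

# A building -> floor -> room hierarchy, e.g. for aggregating sensor data
tree = make_container("building1", [
    make_container("floor1", [
        make_container("room101"),
    ]),
])
```

Such a tree mirrors how data could be aggregated hierarchically, e.g. per room, floor, and building.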

Figure: Example Resource Tree with Containers

Access Control Policies (ACPs) are used by the CSE to control access to the resources. Resources are always linked to Access Control Policies, and an ACP may be shared between several resources. Finally, Subscription and Notification, as well as resource Discovery capabilities, are required to share information.

Source: http://www.onem2m.org/getting-started/onem2m-overview/data-collection-principles/container

Access Control Policy

Access Control Policies contain the rules (Privileges) defining:

  • WHO can access the resource (e.g. identifiers of authorized AE/CSE)
  • For WHAT operation (CREATE, RETRIEVE, UPDATE, DELETE, etc.)
  • Under WHICH contextual circumstances (time, location, IP address)

ACPs are represented by <accessControlPolicy> resources. They comprise the attributes privileges and selfPrivileges, each representing a set of access control rules for entities. The common attribute accessControlPolicyIDs links resources that are not <accessControlPolicy> resources to <accessControlPolicy> resources:

  • A resource is accessible only if the privileges from the ACP grant it
  • All resources have an associated accessControlPolicyIDs attribute, either explicitly or implicitly

Figure: <accessControlPolicy> Resource Content

Figure: <accessControlPolicy> Example

Whereby, the listed attribute are:

  • acr = "Access Control Rule"
  • acor = "Access Control Originators"
  • acop = "Access Control Operations"
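Putting the acr/acor/acop short names together, the snippet below sketches a possible <accessControlPolicy> resource body. The entity IDs are illustrative, and the acop bitmask values used here (CREATE=1, RETRIEVE=2, UPDATE=4, DELETE=8, NOTIFY=16, DISCOVER=32) are stated as an assumption about the oneM2M encoding.

```python
# Sketch of an <accessControlPolicy> body. Assumed acop bitmask:
# CREATE=1, RETRIEVE=2, UPDATE=4, DELETE=8, NOTIFY=16, DISCOVER=32.
# All entity identifiers are hypothetical.
acp = {
    "m2m:acp": {
        "rn": "sensorDataPolicy",
        "pv": {                              # privileges: who may use the
            "acr": [{                        # resources linked to this ACP
                "acor": ["CmySensorApp"],    # WHO: authorized Originator
                "acop": 2 + 32,              # WHAT: RETRIEVE + DISCOVER
            }]
        },
        "pvs": {                             # selfPrivileges: who may manage
            "acr": [{                        # the ACP resource itself
                "acor": ["CadminApp"],
                "acop": 63,                  # all operations
            }]
        },
    }
}
```

Separating privileges (pv) from selfPrivileges (pvs) prevents an entity that can merely read data from modifying the policy that protects it.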

Source: http://www.onem2m.org/getting-started/onem2m-overview/data-collection-principles/access-control-policy

Resource Discovery

Under the RESTful architecture, resource Discovery can be accomplished using a RETRIEVE operation issued by an Originator. The "filterCriteria" parameter allows limiting the scope of the results; Type, Labels, Content Size, etc., can be configured in this parameter.
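Under an HTTP binding, such a discovery RETRIEVE can be sketched as a GET with the filterCriteria expressed as query parameters. The parameter short names used here (fu=1 marking the request as discovery, ty restricting the resource type, lbl filtering by label) and the CSE address are assumptions for illustration.

```python
# Sketch: a discovery request as an HTTP GET URL. Assumed query parameters:
# fu=1 (filter usage: discovery), ty=4 (<contentInstance>), lbl (label filter).
# The CSE base URL is hypothetical.
from urllib.parse import urlencode

def discovery_url(cse_base: str, resource_type: int, label: str) -> str:
    """Build a discovery RETRIEVE URL with filterCriteria query parameters."""
    query = urlencode({"fu": 1, "ty": resource_type, "lbl": label})
    return f"{cse_base}?{query}"

url = discovery_url("http://cse.example.com/in-cse", 4, "temperature")
# -> "http://cse.example.com/in-cse?fu=1&ty=4&lbl=temperature"
```

The CSE would answer with a list of addresses of matching resources rather than the resources' full content.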

Figure: Discovery Example

Source: http://www.onem2m.org/getting-started/onem2m-overview/data-collection-principles/discovery

Resource Subscription and Notification

Events generated by resources can be received using the <subscription> resource. The <subscription> resource contains subscription information for its "subscribed-to" resource and is a child resource of that resource. The originator (resource subscriber) must have RETRIEVE privileges on the "subscribed-to" resource in order to create the <subscription> resource.

Notification policies specified in the attributes can be applied to the <subscription>. These specify which notifications are sent, as well as when and how. For example, "batchNotify" delivers notifications in batches rather than one message at a time.
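The snippet below sketches a possible <subscription> resource body combining these policies. The notification endpoint is hypothetical, and the short names used (nu = notificationURI, enc = eventNotificationCriteria with net as the event type list, bn = batchNotify with num as the batch size) are stated as assumptions about the oneM2M encoding.

```python
# Sketch of a <subscription> body. Assumed short names: nu = notificationURI,
# enc = eventNotificationCriteria (net = event types, with 3 assumed to mean
# "creation of a direct child resource"), bn = batchNotify.
# The notification endpoint URL is hypothetical.
subscription = {
    "m2m:sub": {
        "rn": "temperatureSubscription",
        "nu": ["http://app.example.com/notifications"],  # where to notify
        "enc": {"net": [3]},   # notify when a child (e.g. new data) is created
        "bn": {"num": 10},     # batch up to 10 notifications per message
    }
}
```

Creating this resource as a child of a <container> would make the CSE push a notification to the listed URI each time a new <contentInstance> arrives, batched per the bn policy.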

Figure: Example of Subscription and Notification

Source: http://www.onem2m.org/getting-started/onem2m-overview/data-collection-principles/subscription-notification

oneM2M Open Source Libraries

There is a growing set of open source libraries available online to help developers build powerful, oneM2M-based IoT solutions. Below are a few examples:

A collection of operating system and software projects using oneM2M can also be found here:

Please note that the ETSI oneM2M website will soon publish online content where hands-on information, including runtime code, will be exposed for developers and makers of IoT devices. Preliminary information can be found at the following pages, managed by the contributing author at oneM2M:

Certifications & Alliances

Global Certification Forum (GCF)

The Global Certification Forum (GCF) is a global, non-profit organization that promotes mobile and IoT device certification programs for conformity to agreed interoperability standards. Shaping the industry since 1999 with mobile technology at its core, the GCF today has well over 300 members, including major Mobile Network Operators and MVNOs, device and IoT manufacturers, as well as companies in the testing industry. Together, they work collaboratively in association with key partners to ensure that GCF certification programs fit the industry's needs today and tomorrow.

With certification programs continually evolving – recent additions include 5G, oneM2M, automotive C-V2X, RSP, and mission-critical services (MCPTT) – GCF provides confidence in connectivity wherever you are. Products incorporating cellular mobile & IoT connectivity can be certified, such as:

  • 5G devices
  • M2M/IoT products
  • Connected consumer devices
  • Smartphones and feature phones
  • Tablets
  • USB modems/chipsets
  • Portable WiFi hotspots
  • Embedded modules
  • Laptops

The GCF scheme evolves in sync with developments in mobile technologies and the changing needs of the industry. GCF is a Market Representation Partner of 3GPP and liaises with standards and industry associations to support the successful development and deployment of 5G. It is the only wireless product certification scheme covering all these technologies:

  • 5G
  • NB-IoT, Cat-M1 (Mobile IoT)
  • LTE, LTE-Advanced, LTE-Advanced-Pro (4G)
  • UMTS (3G)
  • CDMA2000 (3G)
  • GSM (2G)

Source: https://www.globalcertificationforum.org/

Global M2M Association (GMA)

M2M is a fast-moving and dynamic environment where multinational customers require easy-to-deploy, international solutions with a high quality of service and a consistent customer experience across networks. In such a fragmented and complex business ecosystem, alliances and partnerships are vital to develop the M2M market and seize growth opportunities.

The Global M2M Association (GMA) is an association of leading mobile operators with world-class networks, including Bell, Deutsche Telekom, Orange, Softbank, Swisscom, Telia Company, and TIM. Its mission is to deploy and manage enhanced and seamless M2M services worldwide. GMA members have proven experience and know-how in supporting business-critical M2M services and collectively connect tens of millions of M2M devices. As established mobile operators with a long history in M2M, they offer outstanding levels of support and customer care, and develop global solutions that meet the demands of emerging M2M applications.

By joining GMA, operators are able to respond to global business requirements, fostering innovation by co-building M2M solutions and building a thriving M2M ecosystem with leading partners. They can leverage the GMA footprint of enhanced M2M connectivity services throughout 42 countries in Europe and in key markets in North America, South America and Asia, to efficiently deploy M2M services.

All GMA members are represented in all committees and work streams. A Steering Committee defines a shared vision, and mission, in order to reach the goals and objectives of the GMA. In parallel, the Program Management Office translates strategic directives into operational objectives by coordinating all projects and work streams.

There are currently four work streams running in GMA:

  • The Membership work stream objective is to identify global MNOs and coordinate their acquisition as partners to implement GMA global development strategy;
  • Product work stream objective is to define and develop products offered jointly by the GMA partners and to coordinate the implementation of joint products;
  • Communication work stream objective is to maximize the global visibility of GMA;
  • Module Certification work stream is managing the GMA certification program by partnering with best in class module vendors.

For more information regarding the benefits of GMA Membership, download an overview about us, or visit our website: http://globalm2massociation.com.

GMA Certification Program

The GMA Certification Program ensures optimized interoperability between hardware and networks, leading to far quicker and greatly improved integration of your M2M devices. All of our approved modules are certified to work across the GMA footprint, so that enterprises can be assured their devices will work seamlessly while roaming in different GMA countries. Key benefits of GMA certification include an accelerated time-to-market, reduced business risk, optimized inventory, and an improved experience for end-users. The following modules are GMA-certified:

  • gemalto Cinterion® EHS5-E
  • gemalto Cinterion® EHS6-A
  • gemalto Cinterion® PHS8-P
  • Sierra Wireless AirPrime® MC7710
  • Sierra Wireless AirPrime® Q2687
  • Telit GE866-QUAD
  • Telit GE910-QUAD
  • Telit GE910-QUAD V3
  • Telit GL865-DUAL V3
  • Telit GL865-QUAD V3
  • Telit HE910-EUR
  • Telit LE910-EU1
  • Telit LE910-EU V2
  • Telit UE865-EUR
  • Telit UE866-EU
  • Telit UE910-EUR

GSM Association (GSMA)

The GSM Association (GSMA) represents the interests of mobile network operators worldwide, uniting more than 750 operators with almost 400 companies in the broader mobile ecosystem, including handset and device makers, software companies, equipment providers and internet companies, as well as organizations in adjacent industry sectors. The GSMA also produces the industry-leading MWC events held annually in Barcelona, Los Angeles and Shanghai, as well as the Mobile 360 Series of regional conferences. GSMA's unrivalled global services encompass improved message delivery for ported fixed and mobile numbers, enhanced network services through unique device analytics, device crime reduction through data sharing, and an intelligence unit which houses extensive industry statistics and forecasts.

The GSMA drives three key industry programs - for Future Networks, Identity, and IoT. The Internet of Things program is an initiative to help operators add value and accelerate the delivery of new connected devices and services in the IoT. This is to be achieved through industry collaboration, appropriate regulation, network optimization, and the development of key enablers to support the growth of the IoT in the longer term. GSMA's vision is to enable the IoT: a world in which consumers and businesses enjoy rich new services, connected by an intelligent and secure mobile network.

Advocacy initiatives are also part of GSMA's mission, including:

  • External Affairs & Industry Purpose
  • Mobile for Development
  • Public Policy Spectrum
  • The Mobile Economy
  • GSMA Training

GSMA members are at the center of the discussions, decisions, and initiatives that shape the future of mobile communications and expand opportunities for the whole industry. Membership in the GSMA keeps your business in touch, forward-thinking and competitive.

For more information, please visit the GSMA corporate website at https://www.gsma.com, or follow the GSMA on Twitter: @GSMA.

PCS Type Certification Review Board (PTCRB)

PTCRB has been the certification forum for select North American mobile network operators since 1997. It is a pseudo-acronym, no longer standing for its original meaning "Personal Communication Systems (PCS) Type Certification Review Board". The PTCRB organization offers its members a certification program for GERAN, UTRAN, and E-UTRAN devices, including the definition and publication of all related test specifications and methods. PTCRB's certification program covers devices operating in the following technologies and bands:

  • GERAN: 850, 1900 MHz
  • UTRA: FDD Bands 2, 4, 5
  • E-UTRA: FDD Bands 2, 4, 5, 7, 12, 13, 14, 17, 25, 30, 66; TDD Band 41

PTCRB certification has no relevance outside of North America, for example in the fast-growing European and Asian IoT markets.

PTCRB certification is based on standards published by 3GPP™, the Open Mobile Alliance (OMA), and other standards-developing organizations (SDOs). In numerous cases, PTCRB certification accommodates North American standards and the requirements of the United States Federal Communications Commission (FCC), as well as of Innovation, Science and Economic Development Canada (ISED). Obtaining PTCRB certification for an IoT device ensures compliance with cellular network standards within North American mobile operator networks. Accordingly, operators that require PTCRB certification may elect to block any devices from their networks that are not PTCRB-certified. CTIA – The Wireless Association administers the PTCRB certification process and is responsible for administering PTCRB-issued IMEIs. Additionally, a community of test laboratory organizations working in the field of PTCRB-associated technologies forms the PTCRB Validation Group (PVG). The PVG holds regular meetings to discuss technical issues and resolve problems jointly. Full and Observer membership categories in the PVG reflect the status and scopes of its member organizations. Mobile network operators typically require PTCRB certification as a lab-entry criterion before initiating their own operator certification activities; PTCRB certification alone is therefore not sufficient for a device to be admitted onto these operators' networks.

For more information on the PTCRB certification process and requirements, please visit their website: https://www.ptcrb.com/

Bridge Alliance

The Bridge Alliance (originally the Bridge Mobile Alliance) is a business alliance of 34 major mobile network operators in Asia, Australia, Africa, and the Middle East. The alliance provides seamless service connectivity and a unified experience for its members' customers, including a suite of integrated, value-added managed services for all subscribers roaming across the alliance's footprint. These services allow enterprises to procure, manage, operate, and optimize mobile services with simplicity and transparency.

The Bridge Alliance also collaborates beyond these regions by forming partnerships with other alliances, such as the Global M2M Association (GMA) in Europe, Japan, the USA, and Canada. These strategic partnerships extend the Bridge Alliance's coverage.

For more information, please visit the Bridge Alliance's website: https://www.bridgealliance.com/

Regional Requirements

When developing or selecting IoT devices and components for deployment in specific markets, it is important to consider any regional requirements that may need to be fulfilled. These requirements may be critical for complying with local and international product security, safety, quality, and legal obligations affecting manufacturers and/or users.

For example, consider the European market, where the following requirements are mandatory:

  • EU Declaration of Conformity (DoC), a document listing the harmonized EN standards and versions that the product complies with
  • Restriction of Certain Hazardous Substances (RoHS) compliance (can be part of the EU Declaration of Conformity)
  • WEEE registration for the countries where the hardware will be deployed (e.g., at Stiftung ear for Germany)
  • CE and WEEE labels must be placed on the device, together with the name and address of the EU representative
  • For devices with batteries (especially lithium batteries), compliance with the following standards:
    • UL1642 or IEC62133
    • UN 38.3
  • For devices in the automotive segment:
    • ECE-R10 certification through a notified body (e.g., DEKRA)
    • Label must be placed on the device

The following list provides a high-level overview of the most important regional requirements that must be fulfilled:

Global:

  • GCF

Europe:

  • RoHS, WEEE, CE
  • EAC, GOST-R (Russia)

North America:

  • PTCRB, FCC (US)
  • ISED (Canada)

Middle East / Africa:

  • NTRA (Egypt)
  • CITC (Saudi Arabia)
  • TDRA (UAE)

Latin America:

  • ANATEL (Brazil)
  • IFETEL (Mexico)

Asia-Pacific:

  • ACMA, RCM (Australia)
  • CCC (P.R. China), NCC (Taiwan)
  • GITEKI, JATE (Japan)
  • IMDA (Singapore)
  • KC (South Korea)
  • NBTC (Thailand)

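The regional checklist above can be captured as a simple lookup table. The sketch below is illustrative only: the region names, groupings, and helper function are examples mirroring the overview, not an exhaustive compliance database.

```python
# Illustrative lookup table mirroring the regional-requirements overview above.
# This is a sketch, not an exhaustive or authoritative compliance database.
REGIONAL_REQUIREMENTS = {
    "Global": ["GCF"],
    "Europe": ["RoHS", "WEEE", "CE", "EAC (Russia)", "GOST-R (Russia)"],
    "North America": ["PTCRB (US)", "FCC (US)", "ISED (Canada)"],
    "Middle East / Africa": ["NTRA (Egypt)", "CITC (Saudi Arabia)", "TDRA (UAE)"],
    "Latin America": ["ANATEL (Brazil)", "IFETEL (Mexico)"],
    "Asia-Pacific": ["ACMA (Australia)", "RCM (Australia)", "CCC (P.R. China)",
                     "NCC (Taiwan)", "GITEKI (Japan)", "JATE (Japan)",
                     "IMDA (Singapore)", "KC (South Korea)", "NBTC (Thailand)"],
}

def required_certifications(region: str) -> list[str]:
    """Return the global requirements plus any region-specific ones."""
    return REGIONAL_REQUIREMENTS["Global"] + REGIONAL_REQUIREMENTS.get(region, [])

print(required_certifications("North America"))
# ['GCF', 'PTCRB (US)', 'FCC (US)', 'ISED (Canada)']
```

A table like this is only a starting point for planning; the actual certification scope for a product must always be confirmed with the relevant national bodies.
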
When electronic devices carry such certifications, it generally means that the product in question has been tested and approved as complying with the corresponding industry or regulatory standard. Such certifications or declarations do not, however, imply that the product is safe, durable, or suitable for your business use under all conditions. Additional testing may be required and is strongly recommended.

To support our customers, we work with world-leading partners such as umlaut to offer consultancy services that help you reduce time-to-market, supporting your team on many topics - from IoT business case development and technology workshops to professional software development and the implementation of a tailored, successful global roll-out plan.

Additionally, we can help you with the device and component validation of your IoT solutions, including certification services and IoT security testing that keep your IoT investments safe.

Securing Interoperability for Customers

We add to our component shelf only those solutions that are proven to work well on our networks. By using these certified components, developers, integrators, and service providers can have the peace of mind that their solution will interoperate well. Please review our process documentation on our LEARN page to find out more about our own operator-specific certification programs for modems (wireless communication chipsets and modules) and other components, such as the antenna, battery, or GNSS solution. You can also read about our governance model for communication efficiency. Taken together, these best practices help secure the highest level of quality and service availability for our customers.

Why should you only use certified components?

Mobile Network Operators (MNOs) deploy highly complex network infrastructures worldwide, each sourced from multiple suppliers. These are usually composed of several sub-system building blocks, such as the Radio Access Network (RAN), the Core Network (CN), the IP Multimedia Subsystem (IMS), Value Added Services (VAS), Business Support Systems (BSS), and Operation Support Systems (OSS), among others. They use well-defined interfaces to exchange signaling data, process inbound and outbound traffic, orchestrate the complex processes necessary to deliver specific essential services to customers, or simply manage operational and administrative procedures. A similar situation exists on the IoT device side, where countless solutions from a vast ecosystem of hardware and software component providers can be integrated. These devices often target disruptive and innovative use cases, usually with proprietary policies and algorithms governing the product’s service enablement and application characteristics.

Yet even though these networks and devices may be designed to work well with each other, issues often arise due to differences in their feature sets and in the supported versions of industry standards. There is additionally the potential for subtle deviations in suppliers’ interpretations of nuances in connectivity and service enablement protocols. The resulting lack of interoperability between devices and the network can lead to problems of varying severity, ranging from minor glitches that may go unnoticed to grave faults affecting the most basic functionalities of the IoT service.

To address this challenge, many MNOs, suppliers, governments, testing companies, and industry alliances have defined certification schemes to secure the highest general interoperability possible. Certification schemes like the "PCS Type Certification Review Board“ (PTCRB) for North America and the Global Certification Forum (GCF) for the rest of the world help to set standards for the definition, qualification, and publication of holistic specifications that prove the interoperability of devices and components with major reference network implementations. Their specifications are iteratively developed to keep pace with technological change, best fitting the industry's needs today and tomorrow.

But aren't industry certifications enough?

Unfortunately not. Mobile network operators are aware that a GCF or PTCRB certification does not always guarantee the seamless operation of devices or modems on their own networks. The seemingly endless combinations of network elements, topologies, and hardware or software versions make it very difficult to follow a cookie-cutter, “certify once for all” approach. This is the reason why operators like us invest heavily to secure interoperability for our customers up-front. By qualifying the interoperability of each modem's 3GPP™ protocol stack against our network, as well as validating highly complex features such as Voice over LTE, we help reduce the cost of product development and time-to-market for our customers. It also means that we clearly understand the characteristics of each product, and can guide customers and developers with workarounds, known issues, and the best-fit solution for their needs.

Ciphering Specifications

TLS v1.2 Cipher Suite without PSK, Example 1

The “TLS_RSA_WITH_AES_256_CBC_SHA256 with 4K Cert” is a TLS v1.2 cipher spec for encryption without a pre-shared key (PSK) that can be modeled using the IoT Solution Optimizer. The cipher suite used not only affects the size of some messages, but can also exclude specific messages; for example, no Server Key Exchange is performed when using it. The following payloads are transferred as part of this TLS handshake procedure (whereby UL = Uplink, DL = Downlink):

  • Client Hello (UL) – 194 Bytes
  • Server Hello (DL) – 66 Bytes
  • Server Certificate (DL) – 847 Bytes
  • Server Key Exchange (DL) – Not applicable
  • Server Hello Done (DL) – 9 Bytes
  • Client Key Exchange (UL) – 267 Bytes
  • Client Change Cipher Spec (UL) – 6 Bytes
  • Client Encrypted Handshake Message (UL) – 85 Bytes
  • Server New Session Ticket (DL) – 175 Bytes
  • Server Change Cipher Spec (DL) – 6 Bytes
  • Server Encrypted Handshake Message (DL) – 85 Bytes

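To get a rough feel for the on-air cost of this handshake, the payload figures listed above can simply be summed per direction. The sketch below is illustrative only; the figures are copied from the list, and transport- and radio-layer overheads are not included.

```python
# Handshake payloads for TLS_RSA_WITH_AES_256_CBC_SHA256 with a 4K certificate,
# copied from the list above ("UL" = uplink, "DL" = downlink), in bytes.
HANDSHAKE = [
    ("Client Hello", "UL", 194),
    ("Server Hello", "DL", 66),
    ("Server Certificate", "DL", 847),
    ("Server Hello Done", "DL", 9),
    ("Client Key Exchange", "UL", 267),
    ("Client Change Cipher Spec", "UL", 6),
    ("Client Encrypted Handshake Message", "UL", 85),
    ("Server New Session Ticket", "DL", 175),
    ("Server Change Cipher Spec", "DL", 6),
    ("Server Encrypted Handshake Message", "DL", 85),
]

uplink = sum(size for _, direction, size in HANDSHAKE if direction == "UL")
downlink = sum(size for _, direction, size in HANDSHAKE if direction == "DL")
print(uplink, downlink, uplink + downlink)  # 552 1188 1740
```

For constrained IoT devices, totals like these are what the IoT Solution Optimizer uses to estimate the data and power budget of a secure session setup.
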
The following trace represents this TLS cipher specification, obtained during an analysis using OpenSSL and a 4K certificate. The certificate size can vary greatly depending on the key size and the depth of the certificate chain.

Example trace file:

TLS_RSA_WITH_AES_256_CBC_SHA256 with 4K Cert.docx

Please note that different service provider clouds and IoT devices support different extensions and numbers of cipher suites. This also affects the size of other messages, such as the Client Hello and Server Hello.

TLS v1.2 Cipher Suite without PSK, Example 2

The “TLS_RSA_WITH_AES_256_CBC_SHA256 with 6K Cert” is a TLS v1.2 cipher spec for encryption without a pre-shared key (PSK) that can be modeled using the IoT Solution Optimizer. The cipher suite used not only affects the size of some messages, but can also exclude specific messages; for example, no Server Key Exchange is performed when using it. The following payloads are transferred as part of this TLS handshake procedure (whereby UL = Uplink, DL = Downlink):

  • Client Hello (UL) – 194 Bytes
  • Server Hello (DL) – 66 Bytes
  • Server Certificate (DL) – 1117 Bytes
  • Server Key Exchange (DL) – Not applicable
  • Server Hello Done (DL) – 9 Bytes
  • Client Key Exchange (UL) – 523 Bytes
  • Client Change Cipher Spec (UL) – 6 Bytes
  • Client Encrypted Handshake Message (UL) – 85 Bytes
  • Server New Session Ticket (DL) – 175 Bytes
  • Server Change Cipher Spec (DL) – 6 Bytes
  • Server Encrypted Handshake Message (DL) – 85 Bytes

The following trace represents this TLS cipher specification, obtained during an analysis using OpenSSL and a 6K certificate. The certificate size can vary greatly depending on the key size and the depth of the certificate chain.

Example trace file:

TLS_RSA_WITH_AES_256_CBC_SHA256 with 6K Cert.docx

Please note that different service provider clouds and IoT devices support different extensions and numbers of cipher suites. This also affects the size of other messages, such as the Client Hello and Server Hello.

TLS v1.2 Cipher Suite without PSK, Example 3

The “TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 with 4K Cert” is a TLS v1.2 cipher spec for encryption without a pre-shared key (PSK) that can be modeled using the IoT Solution Optimizer. The following payloads are transferred as part of this TLS handshake procedure (whereby UL = Uplink, DL = Downlink):

  • Client Hello (UL) – 194 Bytes
  • Server Hello (DL) – 70 Bytes
  • Server Certificate (DL) – 847 Bytes
  • Server Key Exchange (DL) – 305 Bytes
  • Server Hello Done (DL) – 9 Bytes
  • Client Key Exchange (UL) – 42 Bytes
  • Client Change Cipher Spec (UL) – 6 Bytes
  • Client Encrypted Handshake Message (UL) – 45 Bytes
  • Server New Session Ticket (DL) – 175 Bytes
  • Server Change Cipher Spec (DL) – 6 Bytes
  • Server Encrypted Handshake Message (DL) – 45 Bytes

The following trace represents this TLS cipher specification, obtained during an analysis using OpenSSL and a 4K certificate. The certificate size can vary greatly depending on the key size and the depth of the certificate chain.

Example trace file:

TLS v1.2 TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 with 4K Cert

Please note that different service provider clouds and IoT devices support different extensions and numbers of cipher suites. This also affects the size of other messages, such as the Client Hello and Server Hello.

TLS v1.2 Cipher Suite with PSK, Example 1

The “TLS_PSK_WITH_AES_256_CBC_SHA384” is a TLS v1.2 cipher spec for encryption with a pre-shared key (PSK) that can be modeled using the IoT Solution Optimizer. The following payloads are transferred as part of this TLS handshake procedure (whereby UL = Uplink, DL = Downlink):

  • Client Hello (UL) – 246 Bytes
  • Server Hello (DL) – 66 Bytes
  • Server Certificate (DL) – Not applicable
  • Server Key Exchange (DL) – Not applicable
  • Server Hello Done (DL) – 9 Bytes
  • Client Key Exchange (UL) – 26 Bytes
  • Client Change Cipher Spec (UL) – 6 Bytes
  • Client Encrypted Handshake Message (UL) – 101 Bytes
  • Server New Session Ticket (DL) – 191 Bytes
  • Server Change Cipher Spec (DL) – 6 Bytes
  • Server Encrypted Handshake Message (DL) – 101 Bytes

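Because the PSK suite omits the Server Certificate and Server Key Exchange entirely, its handshake is far lighter than the certificate-based example earlier in this section. The quick comparison below is an illustrative sketch using the figures from both payload lists.

```python
# Payload totals (bytes) from the handshake lists in this document:
# TLS_RSA_WITH_AES_256_CBC_SHA256 with 4K cert vs. TLS_PSK_WITH_AES_256_CBC_SHA384.
CERT_SUITE_UL = 194 + 267 + 6 + 85          # Hello, Key Exchange, CCS, Finished
CERT_SUITE_DL = 66 + 847 + 9 + 175 + 6 + 85
PSK_SUITE_UL = 246 + 26 + 6 + 101
PSK_SUITE_DL = 66 + 9 + 191 + 6 + 101

cert_total = CERT_SUITE_UL + CERT_SUITE_DL  # 1740
psk_total = PSK_SUITE_UL + PSK_SUITE_DL     # 752
print(f"PSK handshake saves {cert_total - psk_total} bytes")  # saves 988 bytes
```

On low-bandwidth bearers such as NB-IoT, a saving of this magnitude per session setup can be significant, which is one reason PSK-based suites are popular in constrained deployments.
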
The following trace represents this TLS cipher specification, obtained during an analysis using OpenSSL.

Example trace file:

TLS v1.2 TLS_PSK_WITH_AES_256_CBC_SHA384

Please note that different service provider clouds and IoT devices support different extensions and numbers of cipher suites. This also affects the size of other messages, such as the Client Hello and Server Hello.

TLS v1.2 Cipher Suite with PSK, Example 2

The “TLS_DHE_PSK_WITH_AES_256_CBC_SHA” is a TLS v1.2 cipher spec for encryption with a pre-shared key (PSK) that can be modeled using the IoT Solution Optimizer. The following payloads are transferred as part of this TLS handshake procedure (whereby UL = Uplink, DL = Downlink):

  • Client Hello (UL) – 246 Bytes
  • Server Hello (DL) – 66 Bytes
  • Server Certificate (DL) – Not applicable
  • Server Key Exchange (DL) – 785 Bytes
  • Server Hello Done (DL) – 9 Bytes
  • Client Key Exchange (UL) – 412 Bytes
  • Client Change Cipher Spec (UL) – 6 Bytes
  • Client Encrypted Handshake Message (UL) – 73 Bytes
  • Server New Session Ticket (DL) – 191 Bytes
  • Server Change Cipher Spec (DL) – 6 Bytes
  • Server Encrypted Handshake Message (DL) – 73 Bytes

The following trace represents this TLS cipher specification, obtained during an analysis using OpenSSL.

Example trace file:

TLS v1.2 TLS_DHE_PSK_WITH_AES_256_CBC_SHA

Please note that different service provider clouds and IoT devices support different extensions and numbers of cipher suites. This also affects the size of other messages, such as the Client Hello and Server Hello.

TLS v1.2 Cipher Suite with PSK, Example 3

The “TLS_RSA_PSK_WITH_AES_256_CBC_SHA384” is a TLS v1.2 cipher spec for encryption with a pre-shared key (PSK) that can be modeled using the IoT Solution Optimizer. The following payloads are transferred as part of this TLS handshake procedure (whereby UL = Uplink, DL = Downlink):

  • Client Hello (UL) – 246 Bytes
  • Server Hello (DL) – 66 Bytes
  • Server Certificate (DL) – 847 Bytes
  • Server Key Exchange (DL) – Not applicable
  • Server Hello Done (DL) – 9 Bytes
  • Client Key Exchange (UL) – 284 Bytes
  • Client Change Cipher Spec (UL) – 6 Bytes
  • Client Encrypted Handshake Message (UL) – 101 Bytes
  • Server New Session Ticket (DL) – 191 Bytes
  • Server Change Cipher Spec (DL) – 6 Bytes
  • Server Encrypted Handshake Message (DL) – 101 Bytes

The following trace represents this TLS cipher specification, obtained during an analysis using OpenSSL.

Example trace file:

TLS v1.2 TLS_RSA_PSK_WITH_AES_256_CBC_SHA384

Please note that different service provider clouds and IoT devices support different extensions and numbers of cipher suites. This also affects the size of other messages, such as the Client Hello and Server Hello.

DTLS v1.2 Cipher Suite without PSK, Example 1

The “TLS_RSA_WITH_AES_256_CBC_SHA256 with 4K Cert” is a DTLS v1.2 cipher spec for encryption without a pre-shared key (PSK) that can be modeled using the IoT Solution Optimizer. The cipher suite used not only affects the size of some messages, but can also exclude specific messages; for example, no Server Key Exchange is performed when using it. The following payloads are transferred as part of this DTLS handshake procedure (whereby UL = Uplink, DL = Downlink):

  • Client Hello (UL) – 228 Bytes
  • Server Hello (DL) – 82 Bytes
  • Server Certificate (DL) – 855 Bytes
  • Server Key Exchange (DL) – Not applicable
  • Server Hello Done (DL) – 25 Bytes
  • Client Key Exchange (UL) – 275 Bytes
  • Client Change Cipher Spec (UL) – 14 Bytes
  • Client Encrypted Handshake Message (UL) – 93 Bytes
  • Server New Session Ticket (DL) – 191 Bytes
  • Server Change Cipher Spec (DL) – 14 Bytes
  • Server Encrypted Handshake Message (DL) – 93 Bytes

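Comparing the DTLS figures above with the corresponding TLS example earlier shows the per-message cost of DTLS's additional record and handshake framing (sequence numbers and fragment fields). The sketch below simply diffs the two payload lists from this document; it is illustrative only.

```python
# Per-message payload sizes (bytes) for TLS_RSA_WITH_AES_256_CBC_SHA256 with a
# 4K certificate, copied from the TLS and DTLS handshake lists in this document.
TLS = {"Client Hello": 194, "Server Hello": 66, "Server Certificate": 847,
       "Server Hello Done": 9, "Client Key Exchange": 267,
       "Change Cipher Spec": 6, "Encrypted Handshake Message": 85,
       "New Session Ticket": 175}
DTLS = {"Client Hello": 228, "Server Hello": 82, "Server Certificate": 855,
        "Server Hello Done": 25, "Client Key Exchange": 275,
        "Change Cipher Spec": 14, "Encrypted Handshake Message": 93,
        "New Session Ticket": 191}

# Show how many extra bytes DTLS framing adds to each message.
for msg, tls_size in TLS.items():
    print(f"{msg}: +{DTLS[msg] - tls_size} bytes in DTLS")
```

The overhead is constant per message rather than proportional to payload size, so it weighs most heavily on small messages such as the Change Cipher Spec.
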
The following trace represents this DTLS cipher specification, obtained during an analysis using OpenSSL and a 4K certificate. The certificate size can vary greatly depending on the key size and the depth of the certificate chain.

Example trace file:

DTLS v1.2 TLS_RSA_WITH_AES_256_CBC_SHA256 with 4K Cert

Please note that different service provider clouds and IoT devices support different extensions and numbers of cipher suites. This also affects the size of other messages, such as the Client Hello and Server Hello.

DTLS v1.2 Cipher Suite without PSK, Example 2

The “TLS_RSA_WITH_AES_256_CBC_SHA256 with 6K Cert” is a DTLS v1.2 cipher spec for encryption without a pre-shared key (PSK) that can be modeled using the IoT Solution Optimizer. The cipher suite used not only affects the size of some messages, but can also exclude specific messages; for example, no Server Key Exchange is performed when using it. The following payloads are transferred as part of this DTLS handshake procedure (whereby UL = Uplink, DL = Downlink):

  • Client Hello (UL) – 228 Bytes
  • Server Hello (DL) – 82 Bytes
  • Server Certificate (DL) – 1125 Bytes
  • Server Key Exchange (DL) – Not applicable
  • Server Hello Done (DL) – 25 Bytes
  • Client Key Exchange (UL) – 531 Bytes
  • Client Change Cipher Spec (UL) – 14 Bytes
  • Client Encrypted Handshake Message (UL) – 93 Bytes
  • Server New Session Ticket (DL) – 191 Bytes
  • Server Change Cipher Spec (DL) – 14 Bytes
  • Server Encrypted Handshake Message (DL) – 93 Bytes

The following trace represents this DTLS cipher specification, obtained during an analysis using OpenSSL and a 6K certificate. The certificate size can vary greatly depending on the key size and the depth of the certificate chain.

Example trace file:

DTLS v1.2 TLS_RSA_WITH_AES_256_CBC_SHA256 with 6K Cert

Please note that different service provider clouds and IoT devices support different extensions and numbers of cipher suites. This also affects the size of other messages, such as the Client Hello and Server Hello.

DTLS v1.2 Cipher Suite without PSK, Example 3

The “TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 with 4K Cert” is a DTLS v1.2 cipher spec for encryption without a pre-shared key (PSK) that can be modeled using the IoT Solution Optimizer. The following payloads are transferred as part of this DTLS handshake procedure (whereby UL = Uplink, DL = Downlink):

  • Client Hello (UL) – 228 Bytes
  • Server Hello (DL) – 86 Bytes
  • Server Certificate (DL) – 855 Bytes
  • Server Key Exchange (DL) – 313 Bytes
  • Server Hello Done (DL) – 25 Bytes
  • Client Key Exchange (UL) – 58 Bytes
  • Client Change Cipher Spec (UL) – 14 Bytes
  • Client Encrypted Handshake Message (UL) – 61 Bytes
  • Server New Session Ticket (DL) – 191 Bytes
  • Server Change Cipher Spec (DL) – 14 Bytes
  • Server Encrypted Handshake Message (DL) – 61 Bytes

The following trace represents this DTLS cipher specification, obtained during an analysis using OpenSSL and a 4K certificate. The certificate size can vary greatly depending on the key size and the depth of the certificate chain.

Example trace file:

DTLS v1.2 TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 with 4K Cert

Please note that different service provider clouds and IoT devices support different extensions and numbers of cipher suites. This also affects the size of other messages, such as the Client Hello and Server Hello.

DTLS v1.2 Cipher Suite with PSK, Example 1

The “TLS_PSK_WITH_AES_256_CBC_SHA384” is a DTLS v1.2 cipher spec for encryption with a pre-shared key (PSK) that can be modeled using the IoT Solution Optimizer. PSK handling is defined in RFC 4279. The following payloads are transferred as part of this DTLS handshake procedure (whereby UL = Uplink, DL = Downlink):

  • Client Hello (UL) – 263 Bytes
  • Hello Verify Request (DL) – 48 Bytes
  • Client Hello (UL) – 283 Bytes
  • Server Hello (DL) – 82 Bytes
  • Server Certificate (DL) – Not applicable
  • Server Key Exchange (DL) – Not applicable
  • Server Hello Done (DL) – 25 Bytes
  • Client Key Exchange (UL) – 42 Bytes
  • Client Change Cipher Spec (UL) – 14 Bytes
  • Client Encrypted Handshake Message (UL) – 109 Bytes
  • Server New Session Ticket (DL) – 223 Bytes
  • Server Change Cipher Spec (DL) – 14 Bytes
  • Server Encrypted Handshake Message (DL) – 109 Bytes

The following trace represents this DTLS cipher specification, obtained during an analysis using OpenSSL.

Example trace file:

DTLS v1.2 TLS_PSK_WITH_AES_256_CBC_SHA384

Please note that different service provider clouds and IoT devices support different extensions and numbers of cipher suites. This also affects the size of other messages, such as the Client Hello and Server Hello. Additionally, only one pre-shared key was used in the test. If a pre-shared key dictionary is used in the real application, the payload sizes will differ slightly.

DTLS v1.2 Cipher Suite with PSK, Example 2

The “TLS_DHE_PSK_WITH_AES_256_CBC_SHA” is a DTLS v1.2 cipher specification for encryption with a pre-shared key (PSK) that can be modeled using the IoT Solution Optimizer. DHE_PSK handling is defined in RFC 4279. The following payloads are transferred as part of this DTLS handshake procedure (where UL = uplink and DL = downlink):

  • Client Hello (UL) – 263 Bytes
  • Hello Verify Request (DL) – 48 Bytes
  • Client Hello (UL) – 283 Bytes
  • Server Hello (DL) – 82 Bytes
  • Server Certificate (DL) – Not applicable
  • Server Key Exchange (DL) – 802 Bytes
  • Server Hello Done (DL) – 25 Bytes
  • Client Key Exchange (UL) – 428 Bytes
  • Client Change Cipher Spec (UL) – 14 Bytes
  • Client Encrypted Handshake Message (UL) – 81 Bytes
  • Server New Session Ticket (DL) – 223 Bytes
  • Server Change Cipher Spec (DL) – 14 Bytes
  • Server Encrypted Handshake Message (DL) – 81 Bytes

The following trace represents this DTLS cipher specification, obtained during an analysis using OpenSSL.

Example trace file:

DTLS v1.2 TLS_DHE_PSK_WITH_AES_256_CBC_SHA

Please note that different service provider clouds and IoT devices support different extensions and numbers of cipher suites. This also affects the size of other messages, such as the Client Hello and Server Hello. Additionally, only one pre-shared key was used in the test. If a pre-shared key dictionary is used in the real application, the payload sizes will differ slightly.

DTLS v1.2 Cipher Suite with PSK, Example 3

The “TLS_RSA_PSK_WITH_AES_256_CBC_SHA384” is a DTLS v1.2 cipher specification for encryption with a pre-shared key (PSK) that can be modeled using the IoT Solution Optimizer. RSA_PSK handling is defined in RFC 4279. The following payloads are transferred as part of this DTLS handshake procedure (where UL = uplink and DL = downlink):

  • Client Hello (UL) – 263 Bytes
  • Hello Verify Request (DL) – 48 Bytes
  • Client Hello (UL) – 283 Bytes
  • Server Hello (DL) – 82 Bytes
  • Server Certificate (DL) – 863 Bytes
  • Server Key Exchange (DL) – Not applicable
  • Server Hello Done (DL) – 25 Bytes
  • Client Key Exchange (UL) – 300 Bytes
  • Client Change Cipher Spec (UL) – 14 Bytes
  • Client Encrypted Handshake Message (UL) – 109 Bytes
  • Server New Session Ticket (DL) – 223 Bytes
  • Server Change Cipher Spec (DL) – 14 Bytes
  • Server Encrypted Handshake Message (DL) – 109 Bytes
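
Summing the example values from the three PSK handshakes above illustrates how the choice of key exchange drives the overhead. The sketch below uses the message sizes exactly as listed; "Not applicable" messages simply contribute nothing.

```python
# Sketch: compare the per-direction overhead of the three PSK-based DTLS
# handshake examples above. Byte values are taken from the message lists;
# "Not applicable" messages are omitted.
handshakes = {
    "TLS_PSK_WITH_AES_256_CBC_SHA384": {
        "UL": [263, 283, 42, 14, 109],         # Hellos, Key Exchange, CCS, Finished
        "DL": [48, 82, 25, 223, 14, 109],      # no certificate, no Server Key Exchange
    },
    "TLS_DHE_PSK_WITH_AES_256_CBC_SHA": {
        "UL": [263, 283, 428, 14, 81],
        "DL": [48, 82, 802, 25, 223, 14, 81],  # includes 802-byte Server Key Exchange
    },
    "TLS_RSA_PSK_WITH_AES_256_CBC_SHA384": {
        "UL": [263, 283, 300, 14, 109],
        "DL": [48, 82, 863, 25, 223, 14, 109], # includes 863-byte Server Certificate
    },
}

for suite, directions in handshakes.items():
    ul, dl = sum(directions["UL"]), sum(directions["DL"])
    print(f"{suite}: UL {ul} B, DL {dl} B, total {ul + dl} B")
```

Plain PSK is the leanest of the three (1212 bytes in total), while the DHE_PSK and RSA_PSK variants roughly double the handshake volume through the Diffie-Hellman exchange and the server certificate, respectively.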

The following trace represents this DTLS cipher specification, obtained during an analysis using OpenSSL.

Example trace file:

DTLS v1.2 TLS_RSA_PSK_WITH_AES_256_CBC_SHA384

Please note that different service provider clouds and IoT devices support different extensions and numbers of cipher suites. This also affects the size of other messages, such as the Client Hello and Server Hello. Additionally, only one pre-shared key was used in the test. If a pre-shared key dictionary is used in the real application, the payload sizes will differ slightly.

DTLS v1.2, no Maximum Fragment Length

With the "ECDHE_PSK_WITH_AES_128_CBC_SHA256" DTLS v1.2 cipher spec with pre-shared key (PSK), it is possible to omit the maximum fragment length. Both the Verify Request and session ticket are maintained. This is a leaner process than a full DTLS PSK handshake.

Due to its more modest size, this cipher suite is referred to in the IoT Solution Optimizer as "DTLS v1.2 (Medium)."

Figure: DTLS Procedure without Maximum Fragment Length

DTLS v1.2, no Maximum Fragment Length, Verify Request, Session Ticket

With the "ECDHE_PSK_WITH_AES_128_CBC_SHA256" DTLS v1.2 cipher spec with pre-shared key (PSK), it is possible to omit not only the maximum fragment length, but also the Verify Request and session ticket. This is the leanest process available when compared to a full DTLS PSK handshake.

Due to its simplicity, this cipher suite is referred to in the IoT Solution Optimizer as "DTLS v1.2 (Light)."

Figure: DTLS Procedure with no Maximum Fragment Length, Verify Request, Session Ticket

Cipher Suites

Cipher suites are collections of security algorithms that protect communication between clients and servers, typically consisting of a key exchange algorithm, a bulk encryption algorithm, and a Message Authentication Code (MAC) algorithm:

  • The key exchange algorithm establishes a cryptographic key between two endpoints, which is used to encrypt and decrypt the messages exchanged between them.
  • The bulk encryption algorithm encrypts the data carried in those messages.
  • A MAC algorithm provides data integrity checks, verifying that the exchanged data is not altered in transit between endpoints.
  • Finally, a signature with authentication algorithm may be used for client and/or server authentication.

Cipher suites are typically used within transport encryption protocols such as Transport Layer Security (TLS) for TCP, or Datagram Transport Layer Security (DTLS) for UDP and Non-IP based communication. These define the structure and use of cipher suites within their operation.
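
As an aside, the cipher suites enabled by a local TLS implementation can be listed programmatically. The sketch below uses Python's standard ssl module; the exact suites returned depend on the local OpenSSL build and its configuration.

```python
import ssl

# List the cipher suites the local OpenSSL build enables for a default
# TLS client context. Each entry is a dictionary describing one suite.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ciphers = ctx.get_ciphers()

# Print the first few suites with the protocol version they apply to.
for cipher in ciphers[:5]:
    print(cipher["name"], "-", cipher["protocol"])
```

Running this on different systems makes the point from above concrete: two endpoints only interoperate if their offered suite lists intersect during the handshake.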

There are hundreds of cipher suites containing different combinations of the algorithms above. As such, certain cipher suites may offer higher security than others. A reference list of named cipher suites is provided in the TLS Cipher Suite Registry. As seen in this online registry, each cipher suite uses a unique, segmented name that summarizes the specific combination of algorithms employed, whereby each segment represents a different algorithm or protocol used.

For example, "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" consists of the following segments:

  • [TLS]: The lead segment shows the protocol that this cipher suite is used with
  • [ECDHE]: The key exchange algorithm used is presented in this segment
  • [RSA]: This segment indicates the authentication mechanism used during the handshake
  • [AES]: The session cipher used is presented in this segment
  • [128]: This segment contains the session encryption key size (in bits) for the cipher
  • [GCM]: This segment indicates the type of encryption (cipher-block dependency and additional options)
  • [SHA]: This segment shows the (SHA2) hash function, with the signature mechanism and message authentication algorithm used to authenticate messages
  • [256]: This segment has the digest size (in bits)
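
The segmentation described above can be sketched in code. The parser below is illustrative only: it handles names of the common "TLS_<key exchange>_WITH_<cipher>_<hash>" shape and is not a general-purpose parser for every registered suite.

```python
# Sketch: split a cipher suite name into the segments described above.
# Illustrative only; assumes the common "..._WITH_..." naming shape.
def parse_suite(name: str) -> dict:
    kex_part, _, cipher_part = name.partition("_WITH_")
    protocol, *key_exchange = kex_part.split("_")
    # The last segment names the hash; the rest describe the session cipher.
    *cipher, mac = cipher_part.split("_")
    return {
        "protocol": protocol,                   # e.g. TLS
        "key_exchange": "_".join(key_exchange), # e.g. ECDHE_RSA (kex + auth)
        "cipher": "_".join(cipher),             # e.g. AES_128_GCM
        "mac": mac,                             # e.g. SHA256
    }

print(parse_suite("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"))
```

Applied to "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", this yields the protocol "TLS", the key exchange and authentication segment "ECDHE_RSA", the session cipher "AES_128_GCM", and the hash "SHA256", matching the breakdown above.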

Please note that DTLS cipher suites also include the identifier "TLS" in the first segment of their names. A distinction can be made via the "DTLS-OK" flag within the TLS parameter registry, which marks the cipher suites that are suitable for use with DTLS.

One important aspect to consider is that the encryption, key exchange, and authentication algorithms of a cipher suite require significant processing power and memory on the device. This may pose a challenge for constrained devices, such as those used in NB-IoT and LTE-M projects. As such, the careful selection of a suitable cipher suite is a prudent task.