Digital Key – The Future of Vehicle Access

In 2020 the Car Connectivity Consortium (CCC) announced its finalized Digital Key Release 2.0 specification.
The CCC Digital Key is a standardized ecosystem that enables mobile devices to store, authenticate, and share digital keys for vehicles in a secure, privacy-preserving way that works everywhere.

  1. Introduction to the Digital Key concept

The Digital Key Release 2.0 specification leverages Near Field Communication (NFC) technology for contactless communication between smartphones and vehicles, and supports a scalable architecture for mass adoption while reducing costs.
Vehicle owners will be able to leverage Digital Key Release 2.0 for the following capabilities:
• Security and privacy equivalent to physical keys
• Interoperability and user experience consistency across mobile devices and vehicles
• Vehicle access, start, mobilization, and more
• Owner pairing and key sharing with friends, with standard or custom entitlement profiles
• Support for mobile devices in Battery Low Mode, where normal device operation is disabled
The Digital Key Release 2.0 specification was also designed to meet vehicle manufacturer requirements and to form the basis of future releases that will continue to expand the capability, ease of use, and convenience of mobile vehicle access.

  • Digital Key Architecture Overview

The Digital Key ecosystem consists of vehicles, Vehicle OEM Servers, mobile devices, and Mobile Device OEM Servers communicating with one another using a combination of standardized and proprietary interfaces.
Standardized interfaces ensure interoperability between the implementations of mobile device manufacturers (Mobile Device OEMs) and vehicle manufacturers (Vehicle OEMs).
Mobile devices may act as either owner or friend devices, but the vehicle-to-device interface is the same in either role.
There are 4 types of communication:
• Vehicle – Device
• Vehicle – Vehicle OEM Server
• Device – Device OEM Server
• Vehicle OEM Server – Device OEM Server
In this way it is possible to realize multi-factor authentication (MFA), based on agreements between device manufacturers and automotive manufacturers, in a simple, fast, and trusted way.

  • Digital Key Future:

Digital Key Release 3.0 is expected to be available in the near future.
Digital Key Release 3.0 Adds Passive Keyless Access Capabilities. The Digital Key Release 3.0 specification will enhance Digital Key Release 2.0 by adding passive, location-aware keyless access. Rather than having to pull their mobile devices out to access a car, consumers will be able to leave their mobile device in their bag or pocket when accessing and/or starting their vehicle. Passive access is not only vastly more convenient and a better overall user experience, it also allows vehicles to offer new location-aware features.
To support these new features, the CCC is developing a specification based on Bluetooth Low Energy (BLE) in combination with Ultra-Wideband (UWB) to enable passive keyless access and to allow secure and accurate positioning.
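To see why UWB enables accurate positioning, note that ranging is derived from signal time-of-flight. The toy sketch below (not part of the CCC specification; the timing values are invented for illustration) shows the basic arithmetic of a two-way ranging exchange:

```python
# Toy illustration of UWB time-of-flight ranging (not from the CCC spec):
# distance = speed_of_light * one_way_flight_time.
C = 299_792_458.0  # speed of light, m/s

def distance_from_tof(round_trip_s: float, reply_delay_s: float) -> float:
    """Estimate distance from a two-way ranging exchange.

    round_trip_s: time between sending a poll and receiving the response.
    reply_delay_s: known processing delay at the responder.
    """
    one_way = (round_trip_s - reply_delay_s) / 2.0
    return C * one_way

# A ~2 m separation corresponds to a one-way flight time of ~6.67 ns.
d = distance_from_tof(round_trip_s=263.34e-9, reply_delay_s=250e-9)
```

Because light travels about 30 cm per nanosecond, timestamping at nanosecond precision is what gives UWB its positioning accuracy.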

Car Connectivity Consortium Digital Key:

Whitepaper Digital Key – The Future of Vehicle Access:


IoT Platforms, Powered by Digital Twins

Digital Twins have become one of the most powerful technologies that can empower modern IoT and AI solutions to bring additional value to business and society.

In my previous article, Essential Digital Twins for Modern Industry, we considered the concepts, main use cases, business value, and areas where Digital Twins can be applied. For any implementation it is important to know the main platforms offering Digital Twins solutions, in order to choose the best one for a specific use case.

This article discusses different solutions, with the main focus on cloud platforms that can be used to implement Digital Twins.

A digital twin platform can be considered part of the analytics landscape related to big data and artificial intelligence (AI), as applied to IoT. It needs sufficient storage to save the current status of the model and to manage the life cycle of the digital twin. In other words, digital twins require the model to be tuned initially, and they require the real asset and the digital asset to be synchronized periodically:

Figure 1. The components of digital twins [1]

The platforms that can support digital twins today include Microsoft Azure, AWS, IBM Cloud, Predix, and Google Cloud Platform (GCP).

Let's see what these platforms offer for advanced analytics with Digital Twins:


AWS IoT Device Shadows

Amazon Web Services (AWS) offers the Device Shadow Service for AWS IoT as a simplified Digital Twins service focused on device management (similar to Azure IoT Hub device twins).

AWS IoT Device Shadows main features: 

  • Enable Internet-connected devices to connect to the AWS Cloud and let applications in the cloud interact with them 
  • Devices report their state by publishing messages in JSON format on MQTT topics  
  • Each MQTT topic has a hierarchical name that identifies the device whose state is being updated
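As a concrete sketch of the message format described above, the following shows how a device-side application might build an update for the classic (unnamed) shadow of a thing. The thing name and reported properties are invented for illustration; the topic and payload shapes follow the documented Device Shadow conventions:

```python
import json

def shadow_update(thing_name: str, reported: dict) -> tuple[str, str]:
    """Build the MQTT topic and JSON payload for a Device Shadow update."""
    # Classic (unnamed) shadow update topic; the hierarchical name
    # identifies which thing's state is being updated.
    topic = f"$aws/things/{thing_name}/shadow/update"
    # Shadow documents hold state under "desired" and "reported" sections;
    # a device publishes its actual state as "reported".
    payload = json.dumps({"state": {"reported": reported}})
    return topic, payload

topic, payload = shadow_update("my-vehicle-01", {"doors": "locked", "battery": 87})
```

The back end would set `"desired"` state on the same topic, and the Shadow service reconciles the two documents.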

Figure 2. AWS IoT Device Shadows – Cloud 

AWS based analytics with Digital Twins solutions

AWS does not currently offer a Digital Twins service as SaaS. There are options to create a custom solution using Amazon AI and IoT Core services, integrated with services for 3D modelling (note: this is not a ready-made solution and needs custom development and configuration).

AWS recommends SageMaker as the main platform for ML. With SageMaker, we can define our model and its training parameters and hyper-parameters. We can also store the model to deploy it later, either in the cloud or on-premise. The basic idea of SageMaker is to train the model (typically coded in a Jupyter notebook) and later deploy it as a microservice that serves results during normal operation. A high level of computation is required during training.

One possible implementation could be a custom solution combining AWS IoT Core, Sumerian, and SageMaker.

Figure 3 Sumerian features [2]

Figure 4. Sumerian integration [2]

Amazon SageMaker is a fully managed service that provides every developer and data scientist with the ability to build, train, and deploy machine learning (ML) models quickly. SageMaker removes the heavy lifting from each step of the machine learning process to make it easier to develop high quality models.

Figure 5. SageMaker components [3]


Predix (also known as Predix Platform) is an industrial IoT software platform from GE Digital that implements digital twins in a way very similar to SageMaker. It builds analytics as microservices accessible through a REST API. Predix Analytics uses Predix Asset to access metadata about the assets.


Google has all the essential building blocks for developing and deploying IoT solutions in its cloud platform. However, it long lacked the glue to connect these existing services into an end-to-end device management and data processing pipeline.

Cloud IoT Core

Google exposes an industry-standard MQTT broker and a device registry to onboard connected devices and sensors. The device registry acts as a central repository of all the devices connected to the platform. It contains device metadata such as serial number, make, model, location, asset id and more. Applications can query the registry to get the metadata and latest data.

One side of the registry exposes MQTT and REST endpoints while the other end is connected to the Cloud Pub/Sub service. Devices send messages via the secure MQTT or REST endpoints, and these messages are delivered to other GCP services through Pub/Sub topics.

Cloud IoT Core exposes an industry-standard MQTT broker, so developers familiar with any MQTT client library can target it without modifying their code.
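A sketch of the naming conventions involved (the project, registry, and device names below are invented; the formats follow Cloud IoT Core's documented MQTT conventions):

```python
def iot_core_client_id(project: str, region: str, registry: str, device: str) -> str:
    """Cloud IoT Core expects the MQTT client id in this exact path format."""
    return (f"projects/{project}/locations/{region}"
            f"/registries/{registry}/devices/{device}")

def telemetry_topic(device: str) -> str:
    """Telemetry published to /devices/{id}/events is routed to Pub/Sub."""
    return f"/devices/{device}/events"

cid = iot_core_client_id("my-project", "europe-west1", "my-registry", "sensor-42")
topic = telemetry_topic("sensor-42")
```

Any standard MQTT client can use these identifiers; authentication is handled separately with per-device JWTs rather than the topic or client-id strings.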

Figure 6. Google IoT Platform

Analytics with Digital Twins on Google Cloud Platform

GCP recommends that we use its cloud ML engine with TensorFlow to deploy and train digital twins.

GCP does not offer a specific Digital Twins solution. As with AWS, providing analytics with Digital Twins must be built from existing IoT and AI components with custom development and configuration.

Microsoft Azure

Azure offers cloud components that can be applied to Industrial IoT (I-IoT), covering the following:

  • Azure IoT
  • Azure Data Platform
  • Building visualizations with Power BI
  • Time Series Insights
  • Connecting a device with IoT Edge
  • Azure AI 
  • Azure Digital Twins Service 

 Azure IoT 

Azure IoT is a platform proposed by Microsoft to connect multiple devices, enable telemetry, store measures, run and develop analytics, and visualize results. The key components of Azure IoT are the following:

  • Azure IoT Hub
  • Azure IoT Edge
  • Stream Analytics
  • Azure Data Lake
  • Data Lake Analytics
  • Time Series Insights. 
  •  Azure AI (ML, Cognitive Services)
  • Power BI
  • Cosmos DB
  • Event Hub
  • IoT Hub DPS
  • Azure Digital Twins Service

Microsoft Azure IoT Hub – Device Twins

Device twins are JSON documents that store device state information including metadata, configurations, and conditions. Azure IoT Hub maintains a device twin for each device that you connect to IoT Hub.

  • Microsoft’s device twin is an abstraction of a device’s state using properties and a set of tags containing metadata values 
  • Actions and events are not part of the model, but are handled by application code 
  • Messages are rather lightweight and the content can be selected by the application down to property level 
  • The format of the messages is defined by applications only 
  •  The Device Twin model does not define a ‘template’ or a mechanism to aggregate multiple devices into a combined device model 

Device Twins Concept.

Use device twins to:
  • Store device-specific metadata in the cloud. For example, the deployment location of a vending machine.
  • Report current state information such as available capabilities and conditions from your device app. For example, a device is connected to your IoT hub over cellular or WiFi.
  • Synchronize the state of long-running workflows between device app and back-end app. For example, when the solution back end specifies the new firmware version to install, and the device app reports the various stages of the update process.
  • Query your device metadata, configuration, or state. 
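To make the tags/desired/reported split concrete, here is an illustrative twin document (device id and values are invented; the document shape follows the IoT Hub device twin structure) together with the kind of check a solution back end might run. This is a sketch, not the IoT Hub SDK:

```python
# Illustrative device twin document; IoT Hub maintains one such JSON
# document per connected device.
twin = {
    "deviceId": "vending-machine-07",
    "tags": {"location": {"building": "HQ", "floor": 2}},   # back-end metadata
    "properties": {
        "desired": {"firmwareVersion": "2.1.0"},    # set by the back end
        "reported": {"firmwareVersion": "2.0.3",    # set by the device app
                     "connectivity": "cellular"},
    },
}

def update_pending(twin: dict) -> bool:
    """True when the desired firmware differs from what the device reports."""
    props = twin["properties"]
    return props["desired"]["firmwareVersion"] != props["reported"]["firmwareVersion"]
```

The long-running-workflow pattern from the list above is exactly this loop: the back end writes `desired`, the device converges and writes `reported`, and queries compare the two.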

Figure 7. Device Twins and  Azure IoT Hub [5] 

The Azure IoT Hub registry:

Stores flat key/value data for the device, for example:


  • CRM Key 
  • Spare Part Key 

Figure 8. IoT Hub Device Registry

Figure 9. Azure IoT Hub vs AWS IoT 

Azure Digital Twins Service

Microsoft Azure – the Microsoft public cloud – offers the Azure Digital Twins Service (still in preview), which greatly simplifies the implementation of solutions based on the Digital Twins concept. It provides a specific, extendable object model focused on problems in the building industry. Currently this is the only public cloud providing a complete SaaS Digital Twins offering, one that can be used without expensive additional development, integration, or deep knowledge of several different technologies.

Digital Twins object models describe domain-specific concepts, categories, and properties. Models are predefined by users who want to tailor the solution to their specific needs. Together, these predefined Digital Twins object models make up an ontology. A smart building’s ontology describes regions, venues, floors, offices, zones, conference rooms, and focus rooms. An energy grid ontology describes various power stations, substations, energy resources, and customers. With Digital Twins object models and ontologies, diverse scenarios and needs can be customized.

With Digital Twins object models and an ontology in place, you can populate a spatial graph. Spatial graphs are virtual representations of the many relationships between spaces, devices, and people that are relevant to an IoT solution. This diagram shows an example of a spatial graph that uses a smart building’s ontology.
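A minimal sketch of such a spatial graph (all node names invented), modelled here simply as a parent map so that every space or device knows the space that contains it:

```python
# Each node maps to its parent space; None marks the root of the graph.
graph = {
    "building-1": None,                    # root space
    "floor-1": "building-1",
    "conference-room-A": "floor-1",
    "thermostat-17": "conference-room-A",  # device attached to a space
}

def ancestors(node: str) -> list[str]:
    """Walk up the spatial graph from a node to the root."""
    path = []
    parent = graph[node]
    while parent is not None:
        path.append(parent)
        parent = graph[parent]
    return path

chain = ancestors("thermostat-17")
```

Traversals like this are what let location-aware queries ("all sensors on floor 1") work against the graph.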

Azure Digital Twins important features:

  • Twin object models 
  • Spatial intelligence graph 
  • Advanced compute capabilities 
  • Data isolation via multi- and nested-tenancy capabilities 
  • Security through access control and Azure AD 
  • Integration with other services 

The place of Azure Digital Twins in the whole Azure IoT stack is shown in the schema below.

Figure 10. Azure Digital Twins and Azure IoT [6]

 Digital Twins object model

  • Spaces 
  • Devices 
  • Sensors 
  • Users
  • Other categories: Resources, Extended Types, Ontologies, Property Keys and Values, Roles/Role Assignments, Security Key Stores, UDFs, Matchers, Endpoints

Figure 11: Azure Digital Twins Object Model [7] 

Azure Digital Twins Graph Viewer:

One very important feature is the ability to easily visualize and edit digital twin models.


Azure Digital Twins Graph Viewer is an OSS project hosted on GitHub. It is used to manage and visualize your digital space:

The supported features are shown in the figure below.

Figure 12: Azure Digital Twins Graph Viewer 

 Azure Digital Twins Service in Modern IoT Solutions

There are many areas in IoT solutions, where Digital Twins technology can be used:

  • Device Configuration Management System
  • Device monitoring
  • Device provisioning
  • Software update
  • Multi-tenancy
  • Integration with other solutions
  • Raise alarms (event driven design)

Device Twin vs Digital Twin 

A twin is a cloud-based representation of something that is remote.

  • A Device Twin is:
    • a flat key/value representation of
    • the desired configuration,
    • the reported configuration, and
    • keys to match to an external database
  • A Digital Twin is:
    • a graph (a tree, or a richer graph where needed), with
    • richer semantics,
    • covering not only devices
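The contrast above can be sketched in a few lines (all names and values invented): a device twin is a flat key/value record for one device, while a digital twin is a graph you can traverse:

```python
# Device twin: a flat key/value view of one device's state.
device_twin = {"desired.fw": "2.1.0", "reported.fw": "2.0.3", "crmKey": "C-1001"}

# Digital twin: a graph relating spaces, devices, and people, with
# richer semantics than a single device record.
digital_twin = {
    "type": "Space", "name": "floor-1",
    "children": [
        {"type": "Space", "name": "room-A",
         "children": [{"type": "Device", "name": "sensor-9", "children": []}]},
    ],
}

def count_devices(node: dict) -> int:
    """Recursively count Device nodes in the digital-twin graph."""
    own = 1 if node["type"] == "Device" else 0
    return own + sum(count_devices(c) for c in node["children"])
```

The flat record answers "what is this device's state?"; the graph answers questions about relationships, such as which devices live in which space.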


Providers of modern IoT solutions use digital twins mostly in 2 different ways:

  • Simplified option for device state management (Device Twins, Device Shadows)
  • Fully functional Digital Twins solutions for advanced analytics 
    • Among the major cloud providers, currently only Microsoft offers a ready-to-use SaaS for advanced analytics: the Azure Digital Twins Service.
    • Other providers offer options to build custom solutions, with significant additional effort for development and configuration.

This overview is focused on the major providers of cloud services, excluding some specific solutions from companies focused mostly on Digital Twins.

Expect soon the next article on Digital Twins, where we will have a deep dive into the Azure Digital Twins service.


  1. Veneri, G. and Capasso, A. Hands-On Industrial Internet of Things.
  2. Masat, M., 2020. Create Digital Twins Using AWS IoT Core and Amazon Sumerian. [Online] AWS re:Invent 2019. [Accessed 28 March 2020].
  3. Amazon Web Services, Inc., 2020. Amazon SageMaker. [Online] [Accessed 28 March 2020].
  4. MSV, J., 2020. Google Cloud IoT Core Focuses on Simplicity and Scale. [Online] Forbes. [Accessed 28 March 2020].
  5. 2020. Understand Azure IoT Hub Device Twins. [Online] [Accessed 28 March 2020].
  6. 2020. Overview – Azure Digital Twins. [Online] [Accessed 29 March 2020].
  7. 2020. Understand Object Models and Spatial Intelligence Graph – Azure Digital Twins. [Online] [Accessed 29 March 2020].


If you want more information, feel free to contact me or follow my blog.

You can also learn more if you follow me on Twitter @mihailmateev, and stay in touch on Facebook, Google+, LinkedIn, and the Bulgarian BI and .NET User Group!


Essential Digital Twins for Modern Industry


The Digital Twins concept has become more and more popular over the last several years, and it is viewed from different angles in modern industry.
In a set of articles I will try to explain the digital twins concept, business cases, and main platforms, and after that will go deep into
Digital Twins for IoT and AI, especially the Azure Digital Twins Service – the Microsoft Digital Twins PaaS, realized on Microsoft Azure.

State-of-the-art technologies such as the Internet of Things (IoT), cloud computing (CC), big data analytics (BDA), and artificial intelligence (AI) – all part of Industry 4.0 – are among the main focuses of modern business.
The Digital Twin is a significant enabler for Industry 4.0, and especially for Internet of Things related initiatives.

Although digital twins have been around for several decades, it is the rapid rise of the Internet of Things (IoT) that has made them more widely considered a tool of the future. Digital twins are getting attention because they also integrate things like artificial intelligence (AI) and machine learning (ML) to bring data, algorithms, and context together.

History of Digital Twin

Although the terminology has changed over time, the basic concept of the Digital Twin model has remained fairly stable since its inception in 2002. It is based on the idea that a digital informational construct about a virtual or a physical system could be created as an entity on its own. This digital information would be a “twin” of the information embedded within the virtual or physical system itself, linked to that system through its entire lifecycle [1].

This conceptual model was used in the first executive Product Lifecycle Management Systems (PLM) courses at the University of Michigan in early 2002, where it was referred to as the Mirrored Spaces Model[1].  

Figure 1. Dr. Michael Grieves, University of Michigan, Lurie Engineering Center, Dec, 3, 2001 [1]

Digital Twins emerged as a concept for PLM systems at the beginning of the 21st century, but it is now achieving real value and presence in the industrial space. In fact, it is now recognized as a key part of the Industry 4.0 roadmap.

One of the primary reasons digital twin technology is rapidly being adopted is that there are multiple use cases across the industrial enterprise: engineering, manufacturing and operations, and maintenance and service. Digital twins are made possible (and improved) by a multitude of Industry 4.0 technologies – IoT, AR, CAD, PLM, AI, edge computing, to name a few – to create a powerful tool that’s driving business value [3].

What is Digital Twin?

A Digital Twin is the exact representation of, for example, a building as digital data. One example could be a database that knows everything that happened during the construction phase of a building, such as:

  1. A timeline of every status change reported for all activities executed to deliver the project.
  2. Who reported them?
  3. Issues and Obstructions that needed to be faced during the construction process.
  4.  When those have been resolved and by whom.

Figure 2. Digital Twin Model, described with Sablono [4].

Such a database should also be considered a Digital Twin, since it really is the exact representation of the building’s construction phase.

Digital Twins and Existing Technologies for Information Modelling 

Nowadays, there are many different approaches to information modelling in specific business domains. Most existing technologies focus on modelling physical and virtual systems, but not on being a “replica” of the system during maintenance – one that can be updated to have identical behavior to the original system. One typical example is Building Information Modelling (BIM).

BIM vs. Digital Twins

  1. BIM Is For Design and Construction
  2. BIM Isn’t Designed for Real-Time Operational Response
  3. BIM Focuses on Buildings Rather Than People
  4. A Digital Twin can give you information about the current state of building subsystems
  5. A Digital Twin is a model that evolves over time to deliver more value with each new stage of the asset’s lifecycle
  6. In the future, Digital Twin will certainly supersede BIM software even at the design and build phase of an asset’s lifecycle.



Figure 3. BIM vs. Digital Twins


Essential components to create a digital twin of building 

Figure 4. Essential components to create a digital twin of building and difference with BIM


Components of BIM and digital twin for buildings

Figure 5. A detailed comparison of BIM and digital twin of building


Digital Twins vs. Simulation Models

It is essential to understand the difference between a digital twin and simulation models, which are often used in healthcare, fin-tech and engineering.

Simulation models have been used for decades and may use the same type of sensor data, though not always. But simulations generate and manipulate data as part of the simulation: the whole point of a simulation is to project what CAN happen, not what IS happening at the moment.


Industry areas for application of Digital Twins 

The digital twins concept is applicable to probably most industry areas for modelling physical or virtual systems:

  • Manufacturing Industry (all sub-areas)
  • Healthcare industry
    • Modelling of parts of human body
    • Modeling of person in the context of system, which can be analyzed for specific anomalies.
    • Modeling of processes
  • Automotive industry
  • Fin-Tech Industry
  • Logistics
  • Building industry
    • Smart spaces
    • Construction
    • Energy Models for Optimization
    • Materials testing


The Digital Twins concept also offers many additional opportunities, like better marketing and sales based on customer relationships and product maintenance built on Digital Twins.

Figure 6. Digital Twins Benefits 

Digital Twins for Healthcare

There are broad options for using Digital Twins for humans: going beyond gathering and analyzing data, the technology has the potential to change medicine as we know it by designing a digital model to align the moving parts of a whole system. That applies to individuals as well as the healthcare system as a whole. 

Probably one of the main aims of Digital Twins is to realize personalized medicine by identifying deviations from normal, although it is questionable how feasible this is at our current level of knowledge.
This would help to detect and analyze specific anomalies, including symptoms of viruses (for example COVID-19), helping to identify specific cases quickly and precisely.

On the other hand, there are projects using digital twin techniques for PARTS of the human body, such as the heart.

A digital twin can be defined as a lifelong, rich data record of a person combined with AI-powered models that can ‘interrogate’ the data to answer clinical questions.

All advantages about Digital Twins and healthcare should be considered in the contexts of privacy and regulations.

Digital Twins and Cloud Computing

Design and implementation of Digital Twins is quite challenging, because you need a flexible, extensible model that can trigger different events and call specific business logic.

Development of a specific solution for one industry from scratch is too expensive.

There are 2 groups of solutions, focused on the following goals:

  1. A common solution, which can be extended to cover many use cases for different business domains (Microsoft Azure Digital Twins Service) 
  2. Specific solutions for a group of systems or subsystems, like IoT device management (AWS Device Shadows, Azure IoT Hub device twins, Google Cloud IoT Core device registry, Bosch IoT Things)

Solutions focused on device management and device registries are already well known in IoT.
Digital Twins services for general purposes (like the Azure Digital Twins Service) are relatively new and offer huge opportunities for the mass introduction of Digital Twins into solutions for all industries.

In the next blogs we will compare the Digital Twins solutions of the main software vendors and have a deep dive into the Azure Digital Twins Service.


1. Origins of the Digital Twin Concept (PDF). [Accessed 14 March 2020].

2. Piascik, R., J. Vickers, D. Lowry, S. Scotti, J. Stewart. and A. Calomino (2010). Technology Area 12: Materials, Structures, Mechanical Systems, and Manufacturing Road Map, NASA Office of Chief Technologist. 

3. P. de Wilde, “Ten questions concerning building performance analysis”, Building and Environment, vol. 153, pp. 110-117, 2019. DOI: 10.1016/j.buildenv.2019.02.019 [Accessed 25 November 2019].

4. D. Jung, “Why BIM and Digital Twin Technology shouldn’t be confused”, Sablono – Die Plattform zur Baufortschrittskontrolle, 2019. [Online] [Accessed 25 November 2019].




Azure IoT Solution Architecture Best Practices – IoT Software Update [Part 1]

Software update is one of the major parts of modern, large IoT solutions, and probably also one of the most critical and sensitive. In a series of articles I will try to explain the major approaches in the design and implementation of software updates for IoT devices.
The first article focuses mostly on the major cases of software update in general, and in Azure IoT solutions in particular.

Why do we need software updates for IoT devices?

Modern IoT devices can be very different: from very simple embedded devices to very powerful boards based on different architectures.

Software for IoT devices can be considered in several groups:

  • Device Firmware (that provides the low-level control for the device’s specific hardware)

Application Software in 2 groups:

  • Software packages (groups of files and information about the software)
  • Containerized applications (groups of containers)

Nowadays it is realistic to have not only back-end services, implemented as containers, but also to have containers, running on the devices.

Containers are well-positioned to address some of the main challenges that developers face when deploying software to IoT devices:

  • Minimal hardware resources
  • Widely varying device environments
  • Limited network access (containers, especially Docker, allow incremental updates)

On embedded devices it is possible to configure a container engine and run containers complying with the OCI Image Format or the Docker Image Specification.

Main components of software update for IoT devices:

Modern IoT device software update systems have several components, which can be considered at the conceptual level, agnostic to the type of software:

  • Repository
  • Metadata in IoT Device Management System DB
  • Software to manage downloads, installed on IoT Devices
  • Secure communication, used to download packages
  • Synchronization between  devices and back-end about upcoming updates.
  • Software Bundling (grouping the updates into “bundles”)
  • Campaign Management (managing updates per pool of devices, making it possible to manage the workload when processing the software update)
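The components above could be modelled, at a rough sketch level (all names and fields hypothetical, not from any specific product), like this:

```python
from dataclasses import dataclass, field

@dataclass
class SoftwarePackage:
    """A single update artifact plus the metadata stored in the device DB."""
    name: str
    version: str
    sha256: str  # used later for integrity validation of the download

@dataclass
class Bundle:
    """A logical grouping of packages rolled out together."""
    bundle_id: str
    packages: list = field(default_factory=list)

@dataclass
class Campaign:
    """Targets a pool of devices, so rollout workload can be throttled."""
    campaign_id: str
    bundle: Bundle
    device_pool: list = field(default_factory=list)

bundle = Campaign.__name__ and Bundle("fw-2024-01",
                                      [SoftwarePackage("agent", "1.4.2", "placeholder-digest")])
campaign = Campaign("wave-1", bundle, ["dev-001", "dev-002"])
```

In a real system these records would live in the IoT device management database, with the artifacts themselves in the repository.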

1. Repository

The repository for software updates can vary:
from a Debian package repository like aptly, to multi-package repositories like JFrog Artifactory, to a repository for containers like Docker Registry. Container registries implemented as SaaS by cloud providers, such as Azure Container Registry, are quite popular.

2. Metadata on software packages (containers)

Metadata can be stored in several places:

  • Repository integrated metadata
  • Packages/ containers metadata
  • Metadata, persisted in database (usually part of IoT device management system)

3. Software to manage updates:
This varies from extended standard Linux tools like apt-get to specific applications that manage the whole download process.

Below are listed several most used options:

  • Embedded OS tools
  • General systems for upload/download like FTP client/server
  • Embedded features in the Gateway Service (IoT Hub for Microsoft Azure)
  • Embedded options for container repository (for example Azure Container Registry [ACR]).
  • Custom Solutions

If the IoT devices run a Linux-based OS, commands like apt-get can be used to download packages from different repositories.
The general consideration here is security and the flexibility to extend these tools.

Solutions with an FTP server are a possible option, depending on the implementation and configuration. Possible concerns are again security, scalability, high availability, and multi-tenancy.

IoT gateways like IoT Hub can support embedded features to transport updates, for example messaging with additional logic to split and join the package/container parts, because of the message size limitations (256 KB for IoT Hub).
There are also newer features like IoT Hub Device Streams (in preview), intended mostly for remote monitoring, which can be used for software updates when the number of devices per IoT Hub is not very high.
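The split/join logic mentioned above can be sketched in a few lines, assuming the 256 KB limit stated in the text (a real implementation would also add sequence numbers, retries, and integrity checks):

```python
MAX_MSG = 256 * 1024  # IoT Hub message size limit mentioned above, in bytes

def split(blob: bytes, chunk_size: int = MAX_MSG) -> list[bytes]:
    """Split a package into message-sized chunks for transport."""
    return [blob[i:i + chunk_size] for i in range(0, len(blob), chunk_size)]

def join(chunks: list[bytes]) -> bytes:
    """Reassemble the package on the device side."""
    return b"".join(chunks)

package = bytes(600 * 1024)   # a dummy 600 KB package
chunks = split(package)       # fits into 3 messages of up to 256 KB each
```

The cost of this approach is exactly the extra bookkeeping shown here, which is why a separate download channel is often preferred for large packages.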

IoTHub Device Streams

Container registries like Docker Registry (Azure Container Registry (ACR) on Azure) support built-in functionality for uploading/downloading containers with the required security and RBAC.

Custom solutions: this approach includes any kind of custom implementation (for example Restful services), which can be used to download software updates.

4. Secure communication: secure communication includes several aspects:

  • Communication protocol
  • Security access to resources in the repository, usually based on RBAC (role based access control)
  • Security at the package/container level – validation of the files’ identity (digitally signing the files, or validating the checksum)
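A minimal sketch of the checksum-based variant (the payload here is a dummy; a production system would publish the digest over a trusted channel, or use full digital signatures):

```python
import hashlib

def verify_checksum(payload: bytes, expected_sha256: str) -> bool:
    """Validate a downloaded package against its published SHA-256 digest."""
    return hashlib.sha256(payload).hexdigest() == expected_sha256

pkg = b"firmware-image-contents"                  # dummy package contents
digest = hashlib.sha256(pkg).hexdigest()          # published by the back end
ok = verify_checksum(pkg, digest)                 # untampered package
bad = verify_checksum(pkg + b"x", digest)         # tampered package
```

The device should refuse to install anything for which this check fails, regardless of how the bytes were transported.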

Software update concept:

There are two common concepts related to software download when your solution needs to update the software on IoT devices:

  • Software update using a separate communication channel than the major one, used for messages via Gateway service (Azure IoT Hub).
  • Software update design where packages are transferred via the gateway service (IoT Hub).

The first approach requires configuring a separate endpoint to download the packages from the back end.
This endpoint needs to be secured and configured separately. The pure download implementation is simpler, but the solution needs to support different channels and separate configurations for download and messaging.

Software update concept using separate communication channels for messages and for software update packages and containers.

To do a firmware update using only what is available publicly today in IoT Hub, you would probably start by sending a cloud-to-device message to your device. Exactly how you go about downloading and updating the firmware after that is specific to your device and scenario.
This approach is cleaner from the architectural point of view, because only one communication channel is required, but it needs additional logic to split and merge updates.

5. Synchronization between  devices and back-end about upcoming updates.

The most often used approach is based on the so-called “current configuration” and “desired configuration”:

  • Desired configuration is the planned state after the software update
  • Current configuration is the current state of the software on the device.

This synchronization can be done using direct messages between devices and the back-end, or with “device twins”: configurations that are uploaded asynchronously to the gateway service (IoT Hub), so that both sides (devices and the back-end) can check the available configurations without sending direct messages.
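The desired/current comparison at the heart of this synchronization can be sketched in a few lines. This is a simplified model of the device-twin idea, not the actual IoT Hub twin API:

```python
def pending_changes(desired: dict, current: dict) -> dict:
    """Return the settings that still differ between the desired
    configuration (planned state) and the current (reported) one."""
    return {k: v for k, v in desired.items() if current.get(k) != v}

# The device reports its current state; the back-end publishes the desired
# state; the diff tells both sides which updates are still pending.
desired = {"firmware": "2.1.0", "telemetry_interval_s": 30}
current = {"firmware": "2.0.4", "telemetry_interval_s": 30}
print(pending_changes(desired, current))  # {'firmware': '2.1.0'}
```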


Device twins concept

6. Software Bundling: this is the approach of grouping updates on the logical (sometimes also the physical) level into so-called “software bundles”.
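Bundling can be illustrated as a simple grouping of individual update items under a bundle name; the item fields here are assumptions for illustration:

```python
from collections import defaultdict

def bundle_updates(updates):
    """Group individual update items into named software bundles, so a
    bundle can be planned, deployed, and reported on as one logical unit."""
    bundles = defaultdict(list)
    for item in updates:
        bundles[item["bundle"]].append(item["name"])
    return dict(bundles)

updates = [
    {"name": "kernel-5.10", "bundle": "os-base"},
    {"name": "openssl-3.0", "bundle": "os-base"},
    {"name": "sensor-agent-1.4", "bundle": "app"},
]
print(bundle_updates(updates))
```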

7. Campaign Management
For full coverage of the software update concept we also need to discuss non-functional requirements like performance, scalability, and availability, and the functionality related to these requirements, like campaign management, which we will cover in one of the next articles.

If you want more information, feel free to contact me or follow my blog.

You can learn more also if you follow me on Twitter @mihailmateev, and stay in touch on Facebook, Google+, LinkedIn and the Bulgarian BI and .NET User Group!

Posted in Azure IoT, IoT, IoT Device Software Update, IoT Solutions, IoTHub, Microsoft Azure

Azure IoT Solution Architecture Best Practices – IoT Devices Provisioning

In modern IoT solutions we often have a large number of devices connected to the back-end: thousands, hundreds of thousands, or even millions.
All these devices need an initial configuration to connect successfully to the back-end, and this configuration should be done automatically.

This sequence of articles describes the main concepts of automatic provisioning and best practices for implementing it (focusing mostly on Azure-based IoT solutions).


If devices are configured in advance during manufacturing:

1. It is very difficult to know the connection settings in advance. For big solutions it is possible to know the customer, but it is difficult to know the exact connection details ahead of time.

2. Setting the connection in advance is not secure. If somebody learns the settings, they can initiate a connection from a fake device and compromise the system.

3. Modern IoT solutions are also recommended to implement MFA (multi-factor authentication).

To make zero-touch provisioning possible, the devices need to be pre-set from the back-end.

This initial configuration includes 2 groups of settings:

  • Settings related to the exact connection of the devices to the back-end: endpoint and authentication
  • Settings related to the specific IoT solution's business logic, such as logical grouping of the devices, which can be done from the back-end

The first group of settings requires communication between the back-end and the devices, to let each device know its connection details and to provide authentication.

How can devices be configured automatically? The most common approach is to have a common “discovery” service for the whole solution.
This service does not change, and devices connect to it to receive their settings.
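A minimal model of such a discovery lookup, assuming an enrollment table populated by the manufacturer import (the field names and record shape are illustrative, not the DPS API):

```python
def discovery_lookup(device_id: str, enrollments: dict) -> dict:
    """A fixed, well-known discovery endpoint resolves an authenticated
    device to its assigned gateway service and initial settings."""
    record = enrollments.get(device_id)
    if record is None:
        # Devices missing from the manufacturer import are rejected.
        raise KeyError(f"device {device_id} is not enrolled")
    return {"assigned_hub": record["hub"], "group": record["group"]}

enrollments = {"dev-001": {"hub": "hub-west.example.net", "group": "pumps"}}
print(discovery_lookup("dev-001", enrollments))
```

Because only the discovery endpoint is baked into the device at manufacturing time, the actual gateway assignment can change without touching the device image.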

To provide MFA for device registration, information about the manufactured devices and their specific properties should be imported into the system in advance. This import should be based on information provided by the device manufacturer, so it is not possible for an unauthenticated outside party to supply this production data.

Authentication of devices for initial provisioning should use two independent channels (MFA). Usually those channels are:

  • Import of the manufactured device data
  • Authentication of devices to establish communication channel, used for provisioning

Authentication of devices to establish a connection for initial provisioning can be done in several ways:

  • Basic authentication (not recommended)
  • Integrated authentication using identity management system like Microsoft Active Directory
  • Secure attestation using X.509 or TPM-based identities

Using authentication with specific credentials brings more complexity and more risk. Most modern IoT solutions are designed to use a secured connection based on certificates in different varieties.
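For the certificate-based variant, attestation commonly compares the thumbprint of the certificate a device presents against the thumbprints enrolled during the manufacturer import. A simplified sketch follows; real attestation also validates the certificate chain and signature, which is omitted here, and the placeholder bytes stand in for a real DER-encoded certificate:

```python
import hashlib

def certificate_thumbprint(der_bytes: bytes) -> str:
    """X.509 thumbprints are commonly the SHA-1 digest of the
    certificate's DER encoding, written as uppercase hex."""
    return hashlib.sha1(der_bytes).hexdigest().upper()

def attest(der_bytes: bytes, enrolled_thumbprints: set) -> bool:
    # A device is accepted only if the thumbprint of the certificate it
    # presents matches one registered during the manufacturing import.
    return certificate_thumbprint(der_bytes) in enrolled_thumbprints

fake_der = b"\x30\x82placeholder"  # stand-in for real DER certificate bytes
enrolled = {certificate_thumbprint(fake_der)}
print(attest(fake_der, enrolled))   # True
print(attest(b"other", enrolled))   # False
```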

Both parts of the authentication are two sequential steps of the provisioning process:

  • Manufacturing step.
  • Solution setup step (back-end setup and device initiated provisioning).

Device provisioning with CA

Fig. 1 Provisioning based on manufactured device data and certificates.

Manufacturing step contains:

  • Base software install and setup.
  • Configuring pre-requisites for device provisioning: global endpoint, certificate etc.

The back-end setup includes:

  • Importing information about devices to the back-end system.
  • Integration with PKI (importing of public certificates in the IoT Solution) – when certificate based security is implemented
  • Integration with identity management system – when authentication is based on this approach.
  • Configuring an appropriate discovery service (IoT Hub Device Provisioning Service in Azure), which assists during provisioning and, once a device is authenticated, sends the settings for the correct gateway service (IoT Hub).

When the back-end configuration is done, the provisioning process can start, initiated from the device.

Fig. 2 describes in detail the common approach for modern zero-touch device provisioning.


IoT Provisioning Concept

Fig.2 IoT provisioning concept.

The next article will focus on the solution design and implementation of device provisioning in Azure-based IoT solutions with IoT Hub and the IoT Hub Device Provisioning Service (DPS).



Posted in Azure, Device Provisioning, DPS, IoT, IoT Device Provisioning, IoT Solutions, IoT Suite, IoTHub, IoTHub Device Provisioning Service, Provisioning, Zero Touch Provisioning

Azure IoT Solution Architecture Best Practices (Common Recommendations – part 1)

When we talk about the successful design of IoT solutions, there are two different aspects:

  • Common (technology-agnostic) principles and recommendations.
  • Knowledge of a specific platform that provides components to build the IoT solution (like Microsoft Azure)

When working on specific solutions, we first need to know the common principles, and afterwards to have knowledge of the specific platform used for implementation.

  • Reduce the Complexity:

One of the biggest challenges you face when planning Internet of Things (IoT) solutions is dealing with complexity.
This is a common principle for all solutions, not only IoT. IoT solutions involve many heterogeneous IoT devices, with sensors that generate data that is then analyzed to gain insights. IoT devices are connected either directly to a network or through a gateway device, communicating with each other and with cloud services and applications.

We need strategies that help us simplify development, manage complexity, and ensure that our IoT solutions remain scalable, flexible, and robust:

  • Assume a layered architecture

An architecture describes the structure of your IoT solution, including the physical aspects (that is, the things) and the virtual aspects (like services and communication protocols). Adopting a multi-tiered architecture allows you to focus on improving your understanding about how all of the most important aspects of the architecture operate independently before you integrate them within your IoT application. This modular approach helps to manage the complexity of IoT solutions.
The main question is what the best design is with regard to the layered architecture.
A good design follows the functional and non-functional requirements.
For data-driven IoT applications that involve edge analytics, a basic three-tiered architecture captures the flow of information from devices, to edge services, and then out to cloud services. A detailed IoT architecture can also include vertical layers that cut across the other layers, like identity management or data security.

There are also IoT solutions based on only two main layers: devices and back-end services (usually cloud-based). Solutions that do not require preliminary processing or a prompt response after real-time analysis can be based on this kind of architecture.

  • Implement “Security by Design”

Security must be a priority across all layers of your IoT architecture. We need to think about security as a cross-cutting concern, rather than as a separate layer of the architecture. With so many devices connected, the integrity of the system as a whole needs to be maintained even when individual devices or gateways are compromised. We need security in every module, in every layer, and for the overall IoT solution.

We need to adopt standards and best practices for these aspects of your IoT infrastructure:

  1. Device, application and user identity, authentication, authorization, and access control
  2. Key management
  3. Data security
  4. Secure communication channels and message integrity (by using encryption)
  5. Auditing
  6. Secure development and delivery
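Point 4 (message integrity) can be illustrated with an HMAC tag attached to each message. This is a generic sketch, not the token scheme of any particular gateway service, and the key here is illustrative (real keys come from key management, point 2):

```python
import hashlib
import hmac

def sign(payload: bytes, key: bytes) -> str:
    """Attach an HMAC-SHA256 tag so the receiver can verify that the
    message was not altered in transit and came from a key holder."""
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, key: bytes, tag: str) -> bool:
    # compare_digest avoids leaking information through timing differences.
    return hmac.compare_digest(sign(payload, key), tag)

key = b"shared-device-key"  # illustrative only
msg = b'{"temp": 21.4}'
tag = sign(msg, key)
print(verify(msg, key, tag))              # True
print(verify(b'{"temp": 99}', key, tag))  # False
```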


  • Data Model

IoT solutions are data-centric. Each IoT application has a data model that includes data from the devices, as well as user-generated data and data from outside systems. The data model must support the use cases in accordance with the solution design / customer requirements.
Most IoT systems need more than one data store:


  1. Raw data storage (blob, file, noSQL)
  2. Metadata storage (SQL, optional noSQL)
  3. Aggregated data storage (SQL, noSQL)
  4. Configuration data storage (SQL, optional noSQL)


  • Raw Data Storage

Raw data storage is fundamental for IoT solutions, where one of the main functions is to ingest information from sensors. Usually we have no need of a schema for these messages: the message format can be updated without any changes to the DB structure.
One of the main requirements for most solutions is that this storage be scalable (partitioned). Relational (SQL) databases do not support partitioning very well, and on the other hand they require alignment to a specific database schema. SQL databases are also more expensive than NoSQL solutions or binary/blob storage.

There are several commonly used options for storing raw data:

  1. Blob storage or files (file storage)
  2. Cheap noSQL storage (like Key Value databases)
  3. Document oriented databases
  4. Time series databases

Blob storage or files. There are many reasons to consider Binary Large Objects (BLOBs), or what are more commonly known as files. The storage is cheap and easy to use. The main disadvantage is that we need an additional service or a custom implementation to search for specific information in this storage.

Key-value stores save data as associative arrays, where a single value is associated with a key used as the identifier for the value. Cheap options like Azure Table Storage offer limited options for searching (only on the row key and partition key). Others (like Redis or Cosmos DB's Table API) allow multi-indexing, but at a higher price.
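The (partition key, row key) lookup model can be illustrated with an in-memory dictionary standing in for the table. This is a sketch of the access pattern, not an Azure Table client:

```python
def query(table: dict, partition_key: str, row_key: str):
    """Key-value stores in the Azure Table style index rows only by the
    (partition key, row key) pair; any other filter becomes a scan."""
    return table.get((partition_key, row_key))

# Partitioning by device id spreads load; the row key orders within a device.
table = {
    ("device-001", "2020-01-01T00:00"): {"temp": 20.1},
    ("device-001", "2020-01-01T00:05"): {"temp": 20.4},
}
print(query(table, "device-001", "2020-01-01T00:05"))  # {'temp': 20.4}
```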


Different options for cheap storage in Microsoft Azure.


Document-oriented databases support storing semi-structured data. It can be JSON, XML, YAML, or even a Word document. The unit of data is called a document (similar to a row in an RDBMS). A group of documents is called a “collection”.



Document oriented database design

Time series databases: TSDBs are databases optimized for time series data. Software with complex logic or business rules and a high transaction volume for time series data may not be practical with traditional relational database management systems. Such solutions can be based on blob storage or document databases, but with additional logic that allows users to create, enumerate, update, and destroy various time series and organize them in some fashion.
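The kind of logic a TSDB provides out of the box can be sketched as fixed-width time bucketing with per-bucket aggregation (a toy model, with timestamps as plain seconds):

```python
from collections import defaultdict

def bucket_readings(readings, bucket_seconds=60):
    """Organize raw (timestamp, value) samples into fixed-width time
    buckets and average each bucket -- the sort of rollup a TSDB offers
    natively, and a blob/document store would need added on top."""
    buckets = defaultdict(list)
    for ts, value in readings:
        buckets[ts - ts % bucket_seconds].append(value)
    return {start: sum(v) / len(v) for start, v in sorted(buckets.items())}

readings = [(0, 10.0), (30, 12.0), (65, 20.0)]
print(bucket_readings(readings))  # {0: 11.0, 60: 20.0}
```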

Time series database

  • Metadata storage

Metadata storage usually contains additional information about the main entities in our system (sensors and other devices, users, and other abstractions related to the domain the IoT system serves), as well as the relations between these entities. Usually this information is not very big, and it is easier to store it in a SQL database. There are solutions where metadata is stored in a document-oriented database, but usually the simplest approach is to design the metadata model in an RDBMS.

  • Aggregated data storage

The collected data is processed (often in real time), where raw data is enriched and aggregated. Aggregated data is used for reports over specific periods of time. Aggregated data is not as big as raw data and is usually stored in a SQL database or in a document-oriented database.

  • Configuration data storage.

All settings for the IoT system need to be stored somewhere. Most often this is a SQL database, because the data is small and an RDBMS supports a rich, easy-to-design schema. It is also possible to use NoSQL for configuration settings.




Posted in Azure, Azure Blob Storage, Azure DocumentDB, Azure Storage, Cloud, Internet of Things, IoT, Security, Solution Architecture, SQL

Azure IoT Solution Architecture Best Practices (Introduction)

The Internet of Things (IoT) is one of the areas of the IT industry that has grown rapidly over the last several years and will continue to grow. According to Statista, the number of connected devices is expected to grow five times from 2015 to 2025, reaching 75 billion connected devices in 2025 (up from 15 billion in 2015).


In a series of blog posts I will share experience in the design of IoT solutions (best practices). Most of the content will relate to solutions based on Microsoft Azure.

IoT Basics:

Modern IoT solutions can be very complex and need expertise in different areas:

  • Hardware
  • Networking / connectivity

  • Solution design
  • Application development
  • Security
  • Business intelligence and data analytics
  • Machine learning and artificial intelligence (AI)

1. Hardware:
At the heart of IoT are the billions of interconnected “things”, or devices with attached sensors and actuators that sense and control the physical world. Technology now offers many embedded devices (many of them quite powerful) and “smart” devices used in different industries: automotive, energy, healthcare, logistics, wearables, etc.

2. Networking/ Connectivity:
This is another key aspect of IoT. The growth of the internet and global connectivity makes it much easier to connect embedded devices with back-end servers worldwide. There are now far fewer limitations related to connectivity between devices and server applications, but network design remains a major part of the solution architecture of every Internet of Things system.
In addition to network design, developers should have a working knowledge of network standards, protocols, and technologies. These include Wi-Fi, Bluetooth Low Energy, Zigbee, cellular, and RFID technologies used in consumer applications, as well as Low-Power Wide-Area Network (LPWAN) technologies like LoRa.

3. Security:
Security is one of the biggest concerns in IoT. Communication between embedded devices and back-end servers via the internet opens many avenues for vulnerability. Security must be built in at every step of the design of the system. Other critical issues closely related to security include data ethics, privacy, and liability.
The biggest challenges in IoT security are:

  • The physical and network access to embedded devices and their data
  • Back-end security
  • Communication between embedded devices and the solution back-end

Most embedded devices are still not powerful enough to encrypt/decrypt data when a cryptographic algorithm is used for communication. This makes these devices more vulnerable, and they need to be put behind a firewall.
The approach to establishing the communication (part of the solution design) is also very critical for the security of Internet of Things solutions.

4. Business intelligence and data analytics:
As the number of IoT devices transmitting data increases, big data turns into really big data. Design and implementation require serious data management skills to securely, quickly, and reliably ingest, store, and query the large amounts of heterogeneous data collected from these devices. This data management needs to allow scalability and extensibility.

Many IoT devices generate latency- or time-sensitive data, so it is necessary to filter or discard irrelevant data. Key technologies and platforms for data analytics that IoT developers should develop skills in include Hadoop / HDInsight, Spark, and NoSQL databases like Cosmos DB, MongoDB, and IBM Cloudant.

5. Machine Learning and AI:
Machine learning and AI skills are the final must-have for IoT developers and architects. Intelligent big data analytics involves applying cognitive computing techniques drawn from data mining, modeling, statistics, machine learning, and AI. These techniques can be applied in real time to sensor data streams for predictive analysis or to autonomously make decisions in response to incoming data, and they can also be applied to historical data to identify patterns or anomalies.

6. Application design and development:
This series of articles, starting with the current one, is focused on the architecture and design of IoT solutions. These two aspects are the most important for a solution: design is critical to having a properly working, reliable, and well-performing system. Technology progresses very fast, and the solution design should stay valid for the next several years. Developers should keep track of emerging frameworks and developer kits that they can leverage for rapid prototyping, as well as IoT platforms that provide infrastructure and tools to help automate building, deploying, managing, and operating IoT applications.

The next several articles in this series will focus on best practices in IoT solution design and implementation.



Posted in Uncategorized

How to simulate IoT Gateway using VMware Workstation

IoT systems have been among the most interesting software solutions of the last few years. These systems collect information from different sensors and send it to the back-end, where the data is processed (aggregations, different kinds of analysis). Usually these solutions have separate storages for:

  • raw data
  • processed data
  • configuration data

The application created for this article series is a simple IoT solution implemented with C# and contains the following parts (see the picture below):

  • Managed objects (sensors), that measure specific values like temperature, speed, displacements, etc.
  • Gateways: devices that collect information from sensors and communicate it to the back office
  • Back office – a set of services and applications used to ingest, process, and store data, and to manage and monitor the whole solution.

The simple prototype used for this article includes:

  • Gateway (.NET application)
  • Gateway service: a web service, implemented in .NET and hosted in Microsoft Azure, that communicates with the gateway and stores data in cloud storage (Azure SQL Database)
  • Gateway service: IoT Hub – this is the recommended messaging service for IoT solutions in Azure
  • Azure Stream Analytics
  • Azure SQL Database 
  • Power BI reports that present some statistics based on the generated messages


Quite often during development, using a real gateway (embedded devices like the Raspberry Pi 2 and 3, MinnowBoard Max, or DragonBoard 410c) leads to more complexity and technical challenges. That is the reason to consider using a “simulated gateway”.

A simulated gateway can be just a client application adapted to work on the development machine, or a virtualized client device.

The first approach can lead to differences between the development environment and the real production environment: development machines are usually much more powerful, and it is often impossible to have a codebase 100% identical to the one on the embedded devices because of differences in the SDKs.

Virtualizing the embedded devices makes it possible to run the same code that will run on the real devices, while also giving flexibility, because software engineers are less dependent on additional hardware during the development phase.

There are different virtualization platforms: Hyper-V, VMware, VirtualBox, etc. Which one is better to choose?
Embedded devices have some specific features, so it is better to consider on which platform these devices can be virtualized relatively easily and with good performance.

If you need different VMs on your workstation, it is not possible to use Hyper-V at the same time as other virtualization platforms. When you enable Hyper-V, it changes the way Windows loads on the host: it loads the Hyper-V hypervisor, and your host OS actually runs on top of that hypervisor.
The installed Hyper-V role therefore conflicts with VMware Workstation. To disable Hyper-V from starting, the following commands can be used:

  • bcdedit /set hypervisorlaunchtype off
  • A reboot of the Windows OS is necessary.
  • To enable the Hyper-V role again, use: bcdedit /set hypervisorlaunchtype auto
  • A reboot of the Windows OS is again necessary.

If you don’t want to change the settings every time, you can create a multi-boot configuration:

You can make a new boot entry and then choose at reboot whether to boot with Hyper-V turned on or not.
This is useful because if Hyper-V is installed, VirtualBox or VMware will not work: you get the message “VT-x is not available” (as Hyper-V is using it).
1. Install Hyper-V
2. bcdedit /copy “{current}” /d “Hyper-V”
3. bcdedit /set “{current}” hypervisorlaunchtype off

You can find more details in this blog post:
Creating a “no hypervisor” boot entry – Ben Armstrong – Site Home – MSDN Blogs

How to set up a Windows 10 IoT Core virtualized environment.

  • Downloads

You will also need a Windows 10 IoT Core image. You can get one from Microsoft here; choose “Download Windows 10 IoT Core for MinnowBoard Max”. This is important, because it’s easier to virtualize the x86 build. We now have to install it. Microsoft bundles Windows 10 IoT Core in an installer together with some tools to help you provision it on an SD card and control it, so go ahead and run the downloaded installer. After the installation is complete you can find the image in C:\Program Files (x86)\Microsoft IoT\FFU\MinnowBoardMax


  • Setting the virtual hard drive

You need to expand the FFU and create a virtual hard drive that the virtualization software can use. To do that we are going to use a third-party tool called ImgMount, a community tool that can be found on XDA Developers here. It reads the FFU, converts it to a VHD virtual hard drive, and mounts it. Download it and place it somewhere easy to call from the console. Then follow these steps:

  • Open a privileged powershell. Right click on powershell icon and click Run as administrator.
  • “cd” into the directory with the Windows 10 IoT Core image
  • Run the ImgMount tool, providing flash.ffu as an argument. You should see output similar to this:


  • Next you have to unmount the image. Open Disk Management inside Computer Management. You will see a new disk below your hard disks. To unmount it, right-click on the disk name and, in the menu that opens, select Detach VHD. A dialog will open giving the location of the virtual hard disk that ImgMount has created; note this path, then press OK to unmount the virtual hard disk.
  • Navigate to the location of the virtual disk you noted. After being unmounted it can be safely moved. Move it to the place where you are going to store the virtual machine.

Using this approach you get a virtual disk in VHD format. Additional work is needed to run this VHD on Hyper-V; Hyper-V virtualization of Windows 10 IoT Core devices is not a subject of this article.

  • Running Windows 10 IoT Core on VMware

Both VHD and VMDK contain hard disk images used by virtual machines. A VHD can be converted to VMDK format and used as the hard disk image for a VMware virtual machine. Virtual machines created in Microsoft virtualization products can easily be converted into VMware virtual machines with several products, like VMware vCenter Converter, and some free tools, like StarWind V2V Converter.

StarWind V2V Converter is used in this example to convert the VHD file to VMware VMDK virtual disk.

  • In VMware Workstation you need to create a new machine using a custom configuration.
  • The OS should be set to be installed later.
  • The type of the OS is Windows 10.
  • The firmware type should be EFI.
  • The disk controller should be SATA.
  • You need to use an existing disk, adding the converted VMDK file.

That is all: you can run your device:


New updates can be installed on the existing VM


The graphical UI and command console are available in the same way as when you connect an external monitor to the embedded device.



All running devices are available in the IoT Dashboard.


The next blog posts on Windows 10 IoT Core will demonstrate in much more detail how to develop and deploy simple IoT solutions in Azure.



Posted in .Net, Azure, BI, BI & .Net Geeks, BI & .Net UG, BI & .NET User Group, C#, IoT, IoT Suite, Microsoft, Microsoft Azure, Uncategorized, Windows 10, Windows 10 IoT Core

SQLSaturday #432 CapeTown – Event Recap

I’m very glad that I attended as a speaker at SQLSaturday #432 Cape Town. It was my first SQLSaturday in Cape Town, South Africa, and in Africa in general. It was the first time I presented on Azure Stream Analytics – the new SaaS complex event processing service offered by Microsoft for Microsoft Azure. I also presented, for the first time, a session 100% focused on Entity Framework 7 – the new ORM from Microsoft, offering a different approach to using ORMs in complex projects.

I shared the experience that my team and I already have at Stypes / ICT Automatisering using Stream Analytics in iOTA (Internet of Things Analytics) – the IoT solution of ICT Automatisering, part of ICT Group.


The event was held on Saturday, September 12th, at the River Club in Cape Town’s Observatory area.

The administrator of the conference was Jody Roberts, lead of the local SQL user group. Jody is also a PASS Regional Mentor for MEA (Middle East and Africa). He is the most active community geek in the region, also helping with the organization of many other events. This year Jody organized SQLSaturday Cape Town for the fifth time.

This was my first event in South Africa and my third event in MEA (after speaking at SQLSaturdays in Istanbul). Infragistics Inc. was the only component vendor with a speaker at the conference. Participants gave good feedback about my presentations. There was interest in IoT solutions and the different options for data storage related to these systems (like Azure DocumentDB).

This event was also the biggest SQLSaturday ever organized in Cape Town:


  • There were 20 presentations in 4 tracks
  • Speakers from the USA, UK, India, South Africa and Bulgaria
  • Nearly 200 attendees on site

Thanks to the whole SQLSaturday Cape Town team for the awesome organization and hospitality!

@ River Club, Cape Town




If you want more information about the event and the PASS community, feel free to contact me. Follow this event on Twitter with the hashtags #sqlsatCapeTown and #sqlsat432.

You can learn more about the PASS events if you follow me on Twitter @mihailmateev  , and stay in touch on Facebook, Google+, LinkedIn and Bulgarian BI and .Net User Group !

Posted in Azure, Azure Storage, Azure Stream Analytics, Entity Framework 7, SQL, SQL Saturday, sqlfamily, sqlpass, sqlsat432, sqlsatCapeTown, Stream Analytics | Leave a comment

SQLSaturday #369 Lisbon Event Recap

I’m very glad that I attended as a speaker at SQLSaturday #369 Lisbon. It was my third SQLSaturday in Lisbon and definitely the best one in Europe in my experience. It was the first time I presented on DocumentDB – the new SaaS NoSQL database offered by Microsoft for Microsoft Azure.

I have shared the experience that we already have with my team in Stypes / ICT Automatisering using DocumentDB as one of the data stores for iOTA ( Internet of Things Analytics ) – the IoT solution of ICT Automatisering


The event was held on Saturday, May 16th  at Microsoft Portugal, Lisbon.

The administrator of the conference was Niko Carvalho Neugebauer, lead of SQLPort (the SQL Server User Group) and BITuga (TUGA – the Business Intelligence User Group) of Portugal. He put together an amazing team of volunteers who organized this awesome event.

This was my third event in Portugal (as a speaker at SQLSaturday Lisbon / Portugal) and the sixth SQLSaturday in Portugal. Infragistics Inc. was the only component vendor with a speaker at the conference. Participants gave good feedback about my presentation. There was interest in IoT solutions and the different options for data storage related to these systems (like Azure DocumentDB).

This event was also the biggest SQLSaturday ever organized in Lisbon:


  • There were 49 presentations in 7 tracks
  • More than 540 registrations
  • Around 40 speakers from many countries

Thanks to the whole SQLSaturday Lisbon team for the awesome organization and hospitality!

@ Microsoft Portugal, Lisbon






If you want more information about the event and the PASS community, feel free to contact me. Follow this event on Twitter with the hashtags #sqlsatPortugal and #sqlsatLisbon.

You can learn more about the PASS events if you follow me on Twitter @mihailmateev  , and stay in touch on Facebook, Google+, LinkedIn and Bulgarian BI and .Net User Group !

Posted in .Net, Azure, Azure DocumentDB, Azure Storage, Azure Table Storage, BI, BI & .Net Geeks, DocumentDB, EF, IoT, NoSQL, SQL, SQL Saturday, SQL Server, sqlfamily, sqlpass, sqlsat369, sqlsatLisbon, sqlsatPortugal | Leave a comment