Speaker 1 0:00
There's a very important distinction — we'll talk about this — but it's not OPC UA or MQTT. It's OPC UA and MQTT. It's OPC UA and Sparkplug. It's and, it's not or. It's amazing to me the number of people who speak up about the OPC UA spec and have never read the spec. The moment you read the OPC UA specification, you realize it's guidelines as much as it is specification. It's literally a list of options — there are more options in there than specification. Go ahead.
Speaker 2 0:28
I've always wanted to print the 5,000 pages and the 18 pages and take a picture of those side by side. And you know, Andy and I, when we started — Andy had been doing a lot of development with IBM, and he said, if anything gets over 100 pages, you're going to do something wrong in software.
Speaker 1 0:51
The specification should be minimalist. Yes. You should only be specifying what's required. Yep. And then everything else — every option — is assumed. Yeah. All right, so.
Speaker 1 1:12
And welcome to the Industry 4.0 community podcast, sponsored by 4.0 Solutions and the Mastermind Accelerator program — 16 weeks, starting in June and going through the end of November. I am your host with the most, Walker D. Reynolds. This is a very special podcast: we're actually shooting this as part of the third hour of our monthly Mastermind session on April 12, 2024. Our guests today are Arlen Nipper, who is the co-founder and co-inventor of MQTT, the Chief Technical Officer at Cirrus Link, and a partner at the organization; Aron Semle, who is the CTO at HighByte; and Matt Parris, who is the director of — something at GE. You're in digital at GE,
Speaker 3 2:00
right? Correct — quality, factory test systems. Perfect,
Speaker 1 2:05
then — you guys may know Aaron as the guy who, for the most part, has done the bulk of the build for the HighByte Intelligence Hub platform. Matt Parris is a guy who produces a lot of written content and analysis on the OPC UA specification and the MQTT Sparkplug specification. And Arlen was critical in writing the specification for MQTT and for the Sparkplug specification. So the conversation today is centered around MQTT Sparkplug, and we have a long list of questions from the community that we're just going to go through and have this conversation. So with that — Arlen, let's start with the origin story for MQTT. Where did MQTT come from? Why did you and Andy bother to invent a new protocol? And how has it gotten to where it is today?
Speaker 2 3:03
Okay, I will start at the beginning. Just to give some background: what was I doing for the first 25 years of my career, before Andy and I even met? I came out of Oklahoma State in 1979, and luckily Koch Oil had just opened a refinery in Medford, Oklahoma, and Koch was non-union. So I said, well, what do I do as an engineer? And they go, we don't know — we're building booster stations down to Texas, just go out there and do stuff. So literally, I got to do everything from running the Ditch Witch, to putting the conduit in the ground, to somebody explaining to me what a vacuum tube starter was — Toshiba had just come out with their vacuum tube starters — and Modicon had just come out with the Modicon 484, which was basically the size of a refrigerator. I got to do everything from hooking up the pumps to the valves. They explained to me why there's a limit switch for open and a limit switch for closed on a motor-operated valve, and what a 4-20 milliamp transmitter is, and why it's 4 to 20 milliamps and not 0 to 20 milliamps — all of the background that you've got to know as you get into automation. If you don't know the physical equipment that you're connecting to, the conversation kind of stops. You go, okay, come on guys, we're interfacing to real sensors and real equipment out in the field. That's job one; let's worry about IoT later. So — great career. I mean, I learned, and at that time we had a really good communication system, and it was a company called AT&T. If you were doing SCADA, you could go to them — they were heavily subsidized — and they would run a four-wire circuit anywhere you wanted. I still remember we put a tank farm in Stockholm, Oklahoma. It was twenty-seven miles out, and AT&T goes, no problem, we'll run a phone line out there. And we had 300-baud modems.
And you would get on with toll test in Oklahoma City if you had a problem, and they would isolate the circuit for you. They'd go, oh yeah, we can hear it — there's some noise in that circuit, we'll get somebody out to fix it. So literally, our entire pipeline control system was running on 300-baud modems, and we had a DEC PDP-11 with 128K of memory in Medford, Oklahoma, and that was our SCADA host system. This is 1980 to 1988. Got it? Right. So at the time, Tano built the RTUs that Koch was using, and that Tano unit — this is before UARTs. Right: start bit, eight data bits, stop bit. What they would do is take a shift register and modulate the data onto the wire. So you would get, like, seven preambles and then 123 bits, and then the modem shut off. And then you had to take those 123 bits apart and put together the message: these are the status bits, and these are the analog bits, and here's the setpoint bits. Then UARTs came out, and I remember we went from these old RTUs with the reed relays that were putting the data onto the wire to a more modern Tano — sorry, Quantum — RTU, and it had a UART in it. It actually had a protocol: you sent it the poll message, and it would send back the response. So anyway, my time at Koch was great, and then I had the opportunity to start working with a company called NovaTech that ultimately became Arcom Control Systems. And I designed embedded computers — that's what I did, Z80-based embedded computers. This is from the '80s into the '90s. Yep. And something bad happened in the early '90s, if you're old enough to remember: AT&T got split up by the government. And all of a sudden, all of these really cool multi-drop circuits were getting unreliable, and they were getting expensive.
So now, instead of calling AT&T toll test, you called the Conroe Telephone Company in Conroe, Texas, and said, hey, I've got a problem with my modem — and they go, what's a modem? And so on and so forth. So at Arcom, we made a very good business building protocol converters. We would take these old Lantronix and Tano and Quantum protocols out in the field and convert them to standard Modbus. And we made a very good business — at the time, about 75% of all the midstream pipeline companies were using our product. Then, as AT&T got deregulated and it got really hard to get these multi-drop circuits, all of the satellite companies came in: here came GE Spacenet, Gilat, Scientific Atlanta, AT&T Tridom. But guess what — they had their own proprietary backhaul protocol to get data from the site into the hub, and then from the hub back into your SCADA system. So now we had a double whammy: we had proprietary protocols in the field that we converted to a standard Modbus, and then we had to put that in a proprietary transport layer, get it up to the hub, then get it from the hub back down next to my DEC PDP-11, where I converted it back to Modbus and fed it into the SCADA host computer. So in 1999, Phillips 66 got a brand-new AT&T Tridom VSAT system that was using a backhaul called TCP/IP. And when I tell the story, everybody goes, well, yes — of course,
Unknown Speaker 9:54
we take it for granted. Yep.
Speaker 2 9:57
Yeah, but no — if you were doing SCADA in the '70s, '80s, '90s, and you weren't using DEC equipment with DECnet? Good luck. If you came in there with Windows, if you came in there with any other operating system, they would say, thank you, sir — there's the door.
Speaker 1 10:20
When did the conversation happen? So, to go to the deregulation piece, for those of you who didn't live it — there are a lot of young people here — basically, AT&T was vertically integrated, and through antitrust decisions, whoever owned the infrastructure and whoever provided the service needed to be separate. That's how AT&T got broken apart: AT&T was not allowed to both own the infrastructure and provide the service. And the same thing happened in electricity right around the exact same time. You used to be able to own the physical infrastructure — the wires, the power plants, all that stuff — and sell the electricity to the end user. You can't do that today. So through deregulation, or through antitrust, those companies all got broken up, and that's what happened to AT&T. But as you can tell, the moment you do that, the value proposition changes — the service provider changes. So, when you guys were having the original conversations, how do you come to the conclusion: we have to invent a protocol?
Speaker 2 11:28
Okay, so Phillips 66 had an OASyS — at the time it was Telvent, but now it's AVEVA — they had an OASyS DNA system, and it was polling a Quantum protocol and Modbus. What we would do is create a socket connection out to the remote site, send the poll message, and the edge computer would create the response and send it back over that TCP/IP socket, and then close the socket. Then we'd go to the next booster station. So, number one: Phillips 66 figured out that every byte on the wire cost about $20,000 a year. And if you think about it — here we were in Bartlesville, Oklahoma, we had 600 booster stations, so on one pipeline that was 100 booster stations. You would poll the first station, get all your statuses, get all your analogs, then go to the next booster station. And that polling algorithm took seven minutes to update your analogs and four minutes to update all your statuses, because they would poll their statuses twice as fast as their analogs. And so Steve Conan was the SCADA manager for Phillips 66. His good friend was in IT, and he goes, hey, we're using this new technology IBM just brought in — it's called service-oriented architecture. It's great: you have this message-oriented middleware, and our IT applications are decoupled, because one application can publish information and two other applications can subscribe to it. But if you change your applications, you're not going out and rewriting all of it — in other words, you're not hard-coding applications to applications, right. And if you think about what we were doing with our edge compute technology at Arcom, it was kind of pub/sub, but the subscription was implicit: i.e., you poll this, and it's going to come back to the SCADA system.
So Steve starts thinking, and he goes, hey, Arlen, what if we could do pub/sub for our SCADA system? And so we thought about it. At the time, IBM was already providing technology for Phillips 66, so we got hold of Andy Stanford-Clark, who was working with IBM at Hursley in England. Andy and I got together, and he goes, okay, Arlen, here's IBM MQ. And first of all, it had a 5,000-byte header just to put a piece of data on the wire. You start with 5,000 bytes? I said, Andy, that's a really big bill right there, just to get a message on the wire — we've got to get this down. So we kept talking about it. I was over at Hursley at the time, and Andy would come in and say, hey, I was in the shower and I just figured out how to save another two bytes. And we got it down and got it down, and finally we ended up with a three-byte overhead, and we came up with all the verbs that we needed. We still had one problem: we wanted to publish on this TCP/IP network, but we needed to know the publisher was out there — because you're going to trust that you're going to do everything report by exception. Except, what if it's not there? And that's where we put the death certificate into MQTT.
Speaker 1 15:17
Was that the original — was that the precursor for Sparkplug?
Speaker 2 15:25
No, no, no — people have to understand: Sparkplug has birth certificates, but the death certificate is part of the underlying MQTT in the first place.
Speaker 1 15:37
All right, that's a good distinction to point out, because most people associate the death certificate with Sparkplug.
Speaker 2 15:45
Okay, so at that time they let us prototype, and Phillips 66 had another Tridom system, so we got to start prototyping on it, and we got it up and running. Andy and I — I think it took us six months — cut version 3.0 of MQTT, and we got it up and running. Now, I will have to say, I think it took us almost 18 months negotiating with the IBM lawyers to keep MQTT open. Because, as you can imagine, by '99 the first thing IBM wanted to do was patent MQTT. Right. And we knew if that happened, it was going to kill it — it becomes another proprietary protocol. Yeah, we would have never heard about it. So we got it up and running. We put MQTT in all of the Arcom edge devices that were out in the field, and they would poll everything locally and then package that up using — we'll call it Sparkplug Z. Okay, so that was the binary protocol, or the binary message, that John Luhan, who was an engineer with me, and I developed. And then we helped them write the OmniComm driver for OASyS, so it would understand MQTT and be able to work with the OASyS DNA system — there's more about that later: 25 years later, AVEVA forgot that they were the first people that did MQTT. So, using Sparkplug Z, we got everything up and running and slowly migrated that out, and we were done. We went from seven minutes on the analogs to under 10 seconds, and all the statuses under four seconds, because we could poll so much faster. And here's the mind warp that even today, 25 years later, everybody has a problem with: when we think operations — I've got an RS-232 connector on my computer, therefore I must poll, therefore I must wait to be asked before I can send my data. So imagine: we've got 500 booster stations, and they're all being polled out at the edge.
And then a valve goes in transit, and they publish it in. That was the thing everybody had a problem with when we moved from RS-232 serial poll/response to MQTT: oh, now we can publish data whenever we want — whenever it changes. That's report by exception. The bandwidth on the VSAT, on the AT&T Tridom, went down 80%, and the response time was an order of magnitude faster. And so that was the invention of MQTT.
Speaker 1 18:54
So Phillips 66 — their infrastructure becomes MQTT in '99, 2000. Yep. And when did you guys take the MQTT spec and give that to Eclipse to manage? Where did that come from?
Speaker 2 19:12
That was about — I would say that was about eight years later.
Speaker 1 19:17
Okay. Who was managing the spec? It was you and Andy. Yeah. Okay. So if you look at the origin story: MQTT is invented in '99, 2000. It's used in commercial applications, you see it all around the world — most people just don't know they're using MQTT. It doesn't really get introduced into the mainstream for industry until — was it 2014 when you did your presentation at ICC? That was sort of the big coming out. Was that '14?
Speaker 2 19:47
That year that you were sitting in the audience — that was 2015. And I would say... say that again?
Speaker 1 19:58
You've got a 15-year gap between when it was invented and when it was introduced to primetime in industry. 2015. You got a funny one?
Speaker 4 20:10
I got a funny side story. It must have been 2016. I was at — not Intel, whatever the conference was — giving a presentation on Kepware's IoT Gateway that we were releasing, which supported MQTT. I'm up in front of the audience talking MQTT, the presentation ends, and Arlen steps up and talks to me — never met him before. And he's like, I was the inventor of MQTT, and you did an OK job. And I was like, that's probably the best compliment I could get, right? Like, you have no idea. Crazy. Yeah. So
Speaker 2 20:42
But yeah, Walker — what I did at ICC in 2015, the demo that I did: we were using the only device in the world, that old Arcom director, using Sparkplug Z. And that was the demo that I did. I said, okay, I'm going to discover 300 PLCs and all their tags, and I'm going to do it in like 15 seconds. And everybody watched the screen go
Speaker 1 21:11
green, all populated in real time — it expanded in
Speaker 2 21:16
real time. And I still remember today: at the end of ICC, when you do your presentation, there's audience Q&A — they have a mic and they hand it around. Any questions? And this guy in the audience raised his hand, and they gave him the mic. But he doesn't stay there — he walks up on the stage, walks by me, turns to the audience, and says: this is the future of automation. And that's the first time I had met Walker Reynolds.
Speaker 1 21:50
I literally said, the world just changed with this presentation. The world just changed. I turned to the whole audience and I'm like, what we just saw changes everything.
Speaker 3 22:01
His name is Walker Reynolds, not 'sit down' Reynolds. So —
Unknown Speaker 22:05
Is that true? So
Speaker 1 22:07
So then Sparkplug B — because that was the original Sparkplug, right? Most people don't know there's a Sparkplug A and a Sparkplug B, and now you've got version three. So
Speaker 2 22:17
What happened there is that we had the Arcom director, and we had a lot of people wanting to do this. And so, for our own sanity — we knew Sparkplug Z, if I even tried to document it for you, I couldn't. I wrote a document for Phillips 66, but it is so cryptic. It was binary — I mean, literally, John Luhan and I both came from machine language. We were really good at taking bits: this bit set, then the next seven bits — anyway, it was very cryptic. So for our own sanity, Wes and Chad and the guys at Cirrus Link go, guys, let's define something. So that was Sparkplug A, and that at least let us get it into a modern world. And here's where we started playing the game: okay, MQTT was super efficient, Sparkplug Z was super efficient — how far do we take it? How much leeway do we have? We want to keep it lean and mean, but we want to keep it understandable. That was the juggling game we had. And we did Sparkplug A because Arcom was purchased by Eurotech, and they did — not OASIS, but OSGi. OSGi is a Java container. And they had a public protobuf specification out there that was open source, and we said, okay, we'll start with that. We quickly figured out that wasn't really a very good protobuf schema and we needed to expand it. So that's where we came out with Sparkplug B. And again, at this point in time, this is Cirrus Link, and it was on our public GitHub site — that was it. And then we had customers like Chevron and Exxon going, Arlen, who owns this? And we'd go, well, guys, it's on our public GitHub, you can download it, it's a public specification — but not really. So here was the decision point: where do you take an industrial protocol that has a software stack? Do you take it to IEEE? No — they don't know Jenkins builds or GitHub.
I mean, what would you do at IEEE? Do you take it to W3C? Well, I already had experience with the Eclipse Foundation because of Paho, and I knew Mike Milinkovich — Andy and I knew a lot of the people there. Because remember, the Eclipse Foundation was IBM taking $500 million of software, handing it to Mike Milinkovich, and saying, here, go start an open standards company. So we knew all the guys at Eclipse. So I called Mike Milinkovich and said, we've got this cool protocol, and it leverages MQTT, which you guys already have — Paho. So that's where we landed; that's why we landed at Eclipse. And then we had to start taking this Word document that I had written and turn it into a real conformance standard with all of the legalities of how you do a standard. So it took us three years to take that first document, get it all cleaned up, and get the reference implementation code — because remember, for you to do an open standard at Eclipse, you have to have a public, referenceable implementation, and you have to have a TCK, the technology compatibility kit. So it's taken four years to get all that put together, to get a formal specification document released. And then the huge advantage is that Eclipse, working with ISO/IEC, got it to be an international standard — MQTT Sparkplug is now an ISO/IEC specification. So that's where we are. That was the genesis of Sparkplug B. It all came from a pipeline real-time control system. I know I'm going to get questions —
Speaker 1 26:49
which is why it's device-centric. Yes — in that world, the device is the centerpiece of the universe. That's
Speaker 2 26:55
correct. Now, that's why I love having this conversation. I'm going to get questions from all over the place: why didn't you do this? And, you know, it's really cool that you got the standard out there, and now people can — well, hindsight is 20/20 — why didn't you think of this, or why didn't you think of that?
Speaker 1 27:15
We get that all the time when we talk about report by exception. So the first community question — I'll start with an easy one. Okay. When people say MQTT or Sparkplug is report by exception, what precisely do they mean? And what part of the spec is report by exception? I'll let you answer the question, but what I say is: report by exception is not specified — it is assumed, but the spec gives you the flexibility to not be report by exception if you want, to transmit on interval or on trigger. The origin story is that report by exception was a requirement in order to keep as little on the wire as humanly possible, right. But we'll start with the first community question: when people say MQTT or Sparkplug is report by exception, what precisely do they mean? Okay,
Speaker 2 28:06
so what you have to realize is that, uniquely — unlike any other middleware — MQTT is stateful. Right? One of the verbs you have is CONNECT, and with the CONNECT you give it your username, your password, and your last will and testament, and that lives in the broker. And the broker comes back and says, okay, I know you, you are connected — and then you can publish a message, done, and then you can subscribe to whatever you want. Now, I am stateful: I published all of my pressures and temperatures and valve statuses. And since I am stateful, those are all back in my SCADA host system — I know what they are. As long as I have state, I can look at all my pressures and only publish the one that changed. Think of how many millions of times we polled a valve: what are you? I'm closed. What are you? I'm closed. I'm closed. Well, with MQTT, we publish 'closed' once, and it stays that way as long as the MQTT session stays up. And if it goes in transit tomorrow, we publish 'in transit', and then 'open'.
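The stateful, report-by-exception idea Arlen describes — the consumer already knows every previously published value, so the edge node only publishes the tags whose values changed — can be sketched in a few lines of Python. The tag names here are invented for illustration; this is a sketch of the concept, not the Cirrus Link implementation:

```python
# Report-by-exception sketch: because the MQTT session is stateful,
# the subscriber already holds the last published value of every tag,
# so the edge node only needs to publish metrics that have changed.

def changed_metrics(last_known: dict, current: dict) -> dict:
    """Return only the tags whose value differs from the last publish."""
    return {tag: val for tag, val in current.items()
            if last_known.get(tag) != val}

# First scan: nothing is known yet, so everything gets published.
state = {}
scan1 = {"valve_101": "closed", "pressure_psi": 412.5}
publish1 = changed_metrics(state, scan1)   # both tags
state.update(publish1)

# Second scan: the valve is still closed, so only the pressure goes
# on the wire -- "closed" was published once and stays valid.
scan2 = {"valve_101": "closed", "pressure_psi": 413.0}
publish2 = changed_metrics(state, scan2)   # only pressure_psi
state.update(publish2)
```

The polling still happens locally at the edge; what report by exception removes is the repeated "what are you? I'm closed" traffic over the expensive link.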
Speaker 1 29:26
What is on the wire after the client connects to a broker and we have a stateful connection? If we're monitoring the wire, what will we see on the wire, other than publishes to the broker?
Speaker 2 29:41
You will see a ping and a ping response. And you can ask me, well, Arlen, why aren't you using the TCP state, because all of your operating systems have state? But the problem that we had with the VSAT system — and we still have it today — is that the VSAT system had a buffering algorithm, so that if you were back here in Bartlesville and you had rain fade and that connection dropped, you wouldn't know about it for like five minutes. So we figured out we could not trust the TCP/IP layer to tell us when the connection was broken. So that is the only thing that we don't let TCP/IP do for MQTT: we added that ping to give us that state. And if that broker doesn't see data, or doesn't see the ping, within the time to live, then it takes your death certificate, which you registered, and it publishes it on your behalf.
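The broker-side logic Arlen is describing — if neither data nor a ping arrives within the keep-alive window, publish the client's registered last will — might look roughly like this. It's a simplification (the MQTT 3.1.1 spec gives the broker 1.5x the negotiated keep-alive before declaring the client dead; topic and payload here are made up):

```python
# Death-certificate sketch: the client registers a last will and
# testament (LWT) at CONNECT time; if the broker hears nothing from it
# (no PUBLISH, no PINGREQ) within the timeout, the broker publishes
# the LWT on the client's behalf.

class BrokerSession:
    def __init__(self, will_topic: str, will_payload: bytes, keepalive_s: float):
        self.will = (will_topic, will_payload)
        # MQTT 3.1.1 allows 1.5x the negotiated keep-alive interval.
        self.timeout_s = keepalive_s * 1.5
        self.last_heard = 0.0

    def on_packet(self, now: float) -> None:
        """Any inbound PUBLISH or PINGREQ resets the silence timer."""
        self.last_heard = now

    def check(self, now: float):
        """Return the LWT to publish if the client has gone silent."""
        if now - self.last_heard > self.timeout_s:
            return self.will   # publish the death certificate
        return None

session = BrokerSession("status/node1", b"OFFLINE", keepalive_s=60)
session.on_packet(now=100.0)
alive = session.check(now=150.0)   # 50 s of silence: within the window
dead = session.check(now=200.0)    # 100 s of silence: LWT fires
```

This is exactly the state that a flaky VSAT or cellular link can't be trusted to report: the TCP socket may look alive long after the radio path is gone.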
Speaker 3 30:42
And that same function — you added the ping to MQTT to account for a deficiency in TCP. There are also requests today that say even that's not enough, because if I'm on one side of a broker, bridged to another broker, bridged to a device, I will not see the disconnection between the two brokers. So there are even some implementations that will do a ping of sorts, or watch a heartbeat over an MQTT topic, just to monitor the right connections.
Speaker 1 31:14
And what's crazy is I do that in the client all the time. When people ask, I'll say, well, I'm actually putting a watchdog in my client itself, using two topics interposing one another, and that's how I'm monitoring state for my application.
Speaker 2 31:33
Right — and that is what became the primary host ID in the Sparkplug spec. Because the reason — and this may be on your list of questions, but I'll go ahead and answer it now — is that Phillips 66 had that same problem. Phillips had eight brokers, and that was the beauty. A lot of you haven't had to deal with dial-up modems or four-wire modems and all the equipment: you had a terminal server on the DEC, the terminal server split out into RS-232 ports, the RS-232 ports were connected to the modems, and the modems went onto the phone line. So if you wanted disaster recovery, you had to take that entire set of equipment — all those modems, all those terminal servers — and replicate it for disaster recovery. Whereas once we figured it out with MQTT, all I had to do was have a broker and another broker for my failover. If that broker dies, well, that's fine — everybody in the field can just swing to another broker. All of a sudden, disaster recovery was natural. It was part of how MQTT worked.
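The failover pattern Arlen describes — every field device carries a list of broker endpoints and simply swings to the next one when its current session drops — is trivial to express in code. The broker hostnames here are invented:

```python
import itertools

# Broker-failover sketch: instead of duplicating modem and
# terminal-server hardware for disaster recovery, each MQTT client
# just walks an ordered list of brokers and connects to the next one
# whenever its current session drops.

BROKERS = ["broker-a.example.com", "broker-b.example.com"]

def next_broker(rotation):
    """Pick the next broker endpoint after a session drop."""
    return next(rotation)

rotation = itertools.cycle(BROKERS)
first = next_broker(rotation)    # initial connection
second = next_broker(rotation)   # after the first broker dies
third = next_broker(rotation)    # wraps back around to the first
```

Because the state (last will, retained values) lives in whichever broker the client lands on, no special disaster-recovery hardware is needed at the edge.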
Speaker 1 33:00
So, Aaron, I'm going to let you ask the next question after I ask the next community question — then I'll let you ask one of Arlen. So: John Maldonado said — he says Sparkplug B, but we'll refer to the spec as Sparkplug, since the most current version is three — would you say that Sparkplug is only applicable for process control, only used in layers one and two? And are we better off using JSON payloads for level three and up, perhaps using JSON Schema or Web of Things to define the payloads?
Unknown Speaker 33:37
Was that for me, Walker?
Speaker 1 33:38
Yeah — well, it's thrown out to Arlen, and then for conversation here for all of us. I don't like the fact that he puts it in absolute terms — 'only applicable for process control' — but why don't we answer that one first: is Sparkplug only applicable for process control?
Speaker 2 34:01
I'll answer it, and then I'll let Aaron and Matthew jump in. What if you had asked me that question nine years ago? Let me put that in perspective. Remember 2000? Nobody knew about TCP/IP. Right? Remember nine years ago, when I demoed MQTT — I went to every customer, I went to Conoco, I went to Exxon, I went to all these customers and said, hey, there's this thing called AWS cloud computing, have you guys thought about that? Over my dead body. No way — you are never taking my racks in the basement, I go down and hug them every day. I am never moving to the cloud. So here you're asking me: did I think Sparkplug was going to be just for process control? Well, sure — what else would you use it for? But to me, what's amazing is that you guys, the community, started using it for other things that I had no idea you were going to use it for. And yes, now we're going to be talking about Snowflake and, you know, Unified Namespace and all this, which, guys, wasn't even on anybody's radar at the time. So I'll let Aaron and Matthew —
Speaker 1 35:18
Yeah. Go ahead, Aaron — Matt, what do you guys think? Should Sparkplug — is it only applicable for process control?
Speaker 3 35:28
Now you go — all right. So I would say you can use it wherever you want, but the limitations you come up against are what make it appropriate for an application. I would summarize Sparkplug, as it is today, as a great protocol for going from many things to a single, monolithic 'I want to know about everything' consumer. That use case is very similar to the SCADA use case, where you have a monolithic SCADA application or a monolithic historian, and the whole design of that architecture is that those two things are supposed to receive everything the devices put out. Sparkplug still has this idea of node ID and device ID, so that structure is kind of hard-coded in terms of the topic paths. When you start stretching out into other applications, where the consumer may not be a monolithic 'I need to receive everything' — it can be a consumer that's interested in three pieces of data spread across multiple devices in different topic paths, or buried within a metric — that's where you'll start to feel the stretch of Sparkplug trying to do those types of developments. Or if you're trying to do a very distributed system, where at the bottom layer you have multiple Sparkplug nodes, an MQTT broker above that, and above that multiple consumers — that's where Sparkplug also starts to show its limitations. For example — and I'll just fill in this one detail, then give Aaron a chance to chime in — if you have multiple consumers in Sparkplug, every time a consumer comes on board, it has to say to all the devices, give me your birth certificates. Then all the devices issue their birth certificates, and that triggers all the other consumers to receive new birth certificates and reprocess them, even though they were already in sync.
So again, this is how Sparkplug has been stretched in the past two, three, four years beyond what its initial genesis was. I'll go on more, but —
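The hard-coded topic shape Matt refers to is fixed by the Sparkplug B specification: `spBv1.0/<group_id>/<message_type>/<edge_node_id>[/<device_id>]`, with message types like NBIRTH/NDEATH for the edge node and DBIRTH/DDATA for devices under it. A small helper makes the shape obvious (the group and node names below are made up):

```python
# Sparkplug B topics are fixed to this shape by the specification:
#   spBv1.0/<group_id>/<message_type>/<edge_node_id>[/<device_id>]
# so a consumer cannot invent its own hierarchy the way plain MQTT
# topic design allows -- this is the "hard-coded structure" trade-off.

SPARKPLUG_NAMESPACE = "spBv1.0"

def sparkplug_topic(group_id: str, message_type: str,
                    edge_node_id: str, device_id: str = None) -> str:
    parts = [SPARKPLUG_NAMESPACE, group_id, message_type, edge_node_id]
    if device_id:                 # device-level messages add one level
        parts.append(device_id)
    return "/".join(parts)

# Node birth certificate topic for an edge node:
birth = sparkplug_topic("Pipeline1", "NBIRTH", "Booster42")
# Device data published under that node:
data = sparkplug_topic("Pipeline1", "DDATA", "Booster42", "PLC1")
```

A consumer that only wants one metric still has to subscribe at the node or device level and unpack the payload, which is the stretch Matt describes.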
Speaker 1 37:46
I'll let Aaron go ahead here — and then your take on it.
Speaker 4 37:50
Yeah — look, practically, where we see it used and successful, I would say, just to be direct: it's a competitor to OPC UA down at device connectivity. So if you look at the connection from the device up to Ignition, Sparkplug has a lot of value there today; we see customers get that value. As soon as you start to go northbound of that, into more IT-like systems — the protobuf, having to decode the data, the topic structure — those kinds of things start to show. And that's in no way attacking the standard. The way Arlen introduced where this came from — 100%, it crushes it there. Right? It's fantastic. But in those more IT environments... One thing I'm waiting for is one of the big cloud vendors to add support for Sparkplug. I don't think they are; they probably should — if I were them, I would, I would definitely. But, you know, Arlen and the — like the Snowflake bridge and stuff they have — it just shows some of the value there. But yeah, we see it definitely as a competitor to UA in terms of connectivity down to the device, but north of that it starts to struggle with some of the issues, Matt —
Speaker 1 38:45
And what I would say is this: Sparkplug is appropriate for creating a node endpoint that is going to publish into an infrastructure, a lot of what Matt said. As long as you understand that when you create a Sparkplug edge-of-network node and you are publishing into an infrastructure, what you are saying is that if something updates, all things will update in that payload package. You have to pre-define the structure, and then you update the metrics that are in that structure. It is very, very good for that: if I have that EZRack PLC over there, or an Arduino, for example. I have an Arduino Opta. If I want to take any data point and publish it into an MQTT broker from my Opta, the first thing I have to do is define all those pins, right? I have, whatever, 12 IO on there. What I've done is written a sketch that turns the Opta into an edge-of-network node, and it publishes all 12 of those IO, plus any other variables we have defined, in just one payload. So the Arduino speaks as one unit all the time, as opposed to publishing various variables flat into an infrastructure. It's really good for packaging something as an edge-of-network node. Another thing it's really good at is pre-packaging for a function: I encode all the data I need for an application to consume at a higher level in the stack, but it's specific to that application. The application consumes the edge-of-network node, decodes everything in the payload, does what it needs to do, and then publishes back in. That's what I limit Sparkplug to. And I think that's most common: Sparkplug lives alongside flat MQTT 5.
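The "one node, one payload" pattern described for the Opta can be sketched as a small packaging step. The pin names and the JSON structure below are illustrative, not taken from an actual Opta sketch or the Sparkplug protobuf encoding:

```python
import json

def package_node_payload(io_readings: dict, timestamp_ms: int) -> str:
    """Bundle every pin of the node into a single publish payload."""
    metrics = [{"name": name, "value": value}
               for name, value in io_readings.items()]
    return json.dumps({"timestamp": timestamp_ms, "metrics": metrics})

# All 12 IO points (3 shown) travel as one message on one topic,
# instead of 12 separate publishes on 12 separate topics.
payload = package_node_payload(
    {"DI_01": True, "DI_02": False, "AI_01": 3.7}, 1712900000000)
```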
Speaker 2 40:48
Right. And then, of course, the other thing. It was interesting, your point, Aaron, about Amazon and IoT Core: we all beat on Amazon for so long, but IoT Core actually became a pretty good broker. You can use Sparkplug on it; it's great. But, you know, Microsoft still can't figure out how to do a broker. The other funny thing with Amazon: we had this ProServe team, and they looked at Sparkplug and said, that will never work, and it'll be too expensive, because you're going through IoT Core. They literally could not understand that we were taking up to 10,000 process variables in one Sparkplug payload. They were assuming, hey, if I've got 10,000 process variables, I've got to publish 10,000 separate messages, and then they'd go, oh wow, IoT Core is costing us a fortune. And I go, well, it's 10,000 times more efficient to use Sparkplug than publishing each process variable on its own topic. Now, you can get extremes both ways. And of course, with Sparkplug, you're right, Walker: it was very device-centric. But now, you know, that device, it's nodes, really nodes. And you're right, and I think Aaron and Matthew would both agree, that if you want the UNS and you want to go all the way down to that level, it is there, but you've got to decode the payload to get the rest of the UNS.
Speaker 1 42:39
Let me hit the next question from Annabelle real quick, and then Matt, you can follow up, because when we talk internally about what MQTT is missing, what do we want to see? For me, I want to see a querying layer, where I can query the structure of an MQTT broker without having to use a wildcard, which is what we use now. We use wildcards, and then we parse the structure, looking for the variables we want, looking for a model. Another thing that's missing is things like methods on topics. That's one big one. Right now I can do MQTT subscribe and MQTT publish, passing a topic (and if I'm publishing, passing a value), and those things will perform functions. But the spec doesn't contain a mechanism for defining a method that runs on a topic. So if I go, say, mqtt.topic.runMethod, whatever method, there's no mechanism for that. Right now it's just sort of the Wild West. But what Annabelle asks is: are there any ongoing efforts, especially of the OSS kind, to create a standard or spec to build plugin architectures on top of MQTT? One plugin could enable a REST API to interact with the broker. Another plugin could deal with security and role-based access control. Another could deal with configs, UDTs, JSON schema enforcement, etc. Even if it's not formal, are there conversations centered around expanding the specification, or creating new specifications, that account for these use cases?
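The wildcard-then-parse workaround described here (subscribe broadly, then rebuild the namespace from the topic paths you receive) is easy to sketch. The topics below are invented:

```python
def build_topic_tree(topics):
    """Fold flat MQTT topic paths into a nested dict you can browse."""
    tree = {}
    for topic in topics:
        node = tree
        for part in topic.split("/"):
            node = node.setdefault(part, {})
    return tree

# Topics collected from a '#' wildcard subscription:
tree = build_topic_tree([
    "site1/area2/line3/temperature",
    "site1/area2/line3/pressure",
    "site1/area2/line4/temperature",
])
# tree["site1"]["area2"] now contains both "line3" and "line4"
```

A broker-side query API would make this client-side reconstruction unnecessary, which is exactly the gap being discussed.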
Speaker 2 44:32
Well, for that to happen, and there are discussions around that, Walker: like you said, today everybody does that, and it's ad hoc. And I guess my problem with that is that it was hard enough just to get the member companies together. If I remember right, when we took MQTT to OASIS to get the OASIS standardization, Cisco, Microsoft, IBM, all those companies had to agree that we're going to take this into the OASIS standards body and get the OASIS standard. Then MQTT 5.0 came out, and there were some companies and momentum behind that. But what you would have to have is: where do you get a standards body together to define what you're describing? And I would say that's the hard part, even with all the work that Cirrus Link did in getting Sparkplug B out there and keeping it open. Just trying to have a framework that people can follow is hard enough. And I'll put it out there right now: everything we're doing in the Eclipse Sparkplug working group is a very small group of people. That's some Cirrus Link committers, some Opto 22 committers, some HiveMQ committers, but it's a very small organization, and we're constrained. These are literally all people doing this work in their spare time, putting all this together. So the more people we can get to join the Sparkplug working group and get on as committers, to actually do some of the work, the faster these specifications could move. Okay.
Speaker 3 46:28
But I would propose that the best way forward is to take the best commercial product in the market that is a consumer of this, whatever this function is, get buy-in to get it developed there and made available to the market, and then you get a coalition around: okay, this is what it's going to look like, and we can build from there. I would say that's how Sparkplug got so popular with Ignition. That seemed like the perfect platform: okay, we've got an open platform here with Ignition, you can download it, you can try it, you can use Sparkplug, and it can kind of be your gold standard of what Sparkplug is supposed to be. And you get that immediate realization of the value: I've got this device publishing data, and I'm seeing the tags just populate right in front of my eyes. So whoever the consumer is, let's talk about this REST API, or this query, or this browser: you can build the browser, have that interface standardized, hold it up to the market and say, isn't this great, and have the two sides of it. And then let the market accept it.
Speaker 4 47:34
What's your take, Arlen, on that? So take a simple one, the REST API to go look at the topic structure, so you don't have to subscribe to everything. Do you think that should be part of the MQTT standard? Do you think that should be a new standard that bolts onto MQTT? What are your thoughts on that? Good question.
Speaker 2 47:50
I don't think it should be part of it. I think you've got to keep it simple. Keep it simple, stupid, I guess they say. Going back to my notion: the original MQTT spec was 18 pages long. People can wrap their heads around it; most people can code up an MQTT client in a couple of days, even without any reference implementation code. So the simpler we keep that, the better. Now, I don't know where you go, what standards body you go to, because if you were to make that a standard for how you query topics: now you've got HiveMQ, you've got EMQ, you've got Amazon's IoT Core, you've got Microsoft, supposedly, with their new fabric. So even putting more extensions on top of that, I don't know if you'd go to the OASIS standards body, because they own the MQTT standard. Do you put a group of companies together to go to OASIS to be able to do the things Walker was talking about?
Speaker 1 49:07
Well, one of the things I'm doing, and I dropped in an example: I'm doing this test where I have a very common use case. MQTT is used for this, HighByte is used for this; anybody who's using MQTT is doing this. You're taking existing OPC infrastructure. Somebody's got an OPC server with lots of tags, and either you can do explicit reads through that OPC server straight into the device and pull the value back, or I can read a variable node that they put in the OPC UA namespace, right? What a lot of people are doing is saying, OPC UA is not appropriate for me to transport my data into infrastructure; I need to do it somewhere else. So everyone's building gateways and converting OPC UA namespaces into MQTT namespaces. That is a very, very common implementation, and I'm doing a test right now where I'm doing it three ways. First, I'm just using MQTT 5, and I'm literally breaking out channel, device, grouping, tags, and that's my namespace. There's no standard; I'm just breaking it apart how I want to. Then I'm doing Sparkplug, where I'm breaking apart the device, in this case KEPServer, and I'm packaging one Sparkplug payload that is all of the tags inside of KEPServer, so KEPServer becomes an edge-of-network node. And then I'm also doing it with Part 14 from OPC UA. And I keep asking the question, and I don't want to beat up OPC, I legitimately want to understand why Part 14 is not more widely adopted. Why? Because it's not being adopted; no one's adopting it. There are only a couple of products I've even found that have adopted it, and they didn't really use the specification. I mean, they call it Part 14, but... So I'm really struggling. I know where to use Sparkplug. I know where to use MQTT 5. But I'm struggling on where to use Part 14.
And I think a lot of these questions in here are related to: when do I do it one way as opposed to another? Your thoughts, and you don't need to be a part of the holy war; it's my job to run my mouth on that stuff. But as you see it, where is Part 14 appropriate, Arlen? Obviously, using OPC UA Part 14 in a device, in a PLC, makes a lot of sense. But when I look at Sparkplug and compare it to Part 14, I'm like: unless I want to transmit node attributes, or I want to use the OPC UA information model, I don't know why I would ever use Part 14 if I'm not transmitting those things. Is that
Speaker 2 52:12
You know, to me, it's funny. I have a lot of oil and gas customers, I had tons of customers using OPC UA, and I'll bring up Todd Anslinger, who sits on the Sparkplug working group and is the IoT executive with Chevron. I go, "Todd, do you guys use the OPC UA companion model?" And he goes, "What's that?" And at Exxon, nobody even knows that it's out there to begin with. So nobody that I know is using it. And then you've got CESMII trying to do some work kind of on top of that, and I don't even really know what they're doing.
Speaker 1 52:56
Well, they're creating their profiles, open profiles that they want people to use, but there's definitely overlap between those profiles and UA information models. There's significant overlap, and I know CESMII and the OPC Foundation are working together on that. Let me ask you this. When you're working with MQTT, you guys at HighByte support flat MQTT and Sparkplug. Do you natively parse Part 14 right now? No. Okay. So you don't have any
Speaker 4 53:30
We don't have any customers that have requested that, no.
Speaker 1 53:33
Okay, so when you guys are implementing MQTT: I know that for us, the number one function of HighByte to start is protocol conversion. It's literally the first thing anybody does with it. What are you seeing, the applications people are using HighByte and MQTT and/or Sparkplug B for together? What are you seeing as the primary applications MQTT is being used for? Yeah,
Speaker 4 54:03
So you'll see the UA-to-MQTT conversion. You'll see that to get into a UNS, right, and then they'll have either a factory UNS or an enterprise UNS, and tunnels. You'll see it as a way of getting the real-time data up through their infrastructure, up to the cloud; you'll see that a lot. We are used a lot to take Sparkplug B downstream, coming from devices, and convert it to MQTT JSON, or into other formats. And you mentioned the information model: so we just recently added that, so we can pull structured data out of UA now. It's pretty complex. We only really see it with people who are programming their own PLCs, and they find this and they're like, oh, I'm going to create, basically, UDTs or information models. Or we'll see it in oil and gas, where a service vendor will go do it and create structure types inside the PLC address space. But what is interesting is we can take those and convert them to Sparkplug B template definitions and publish those out. Sparkplug is simpler, so it's going from something very complex to a simpler format, but it's all you really need, right, what's in Sparkplug today. And the end result is you can send that to Snowflake, you can send that to SiteWise, you can carry those data definitions around. There's a lot of usefulness in that. It's pretty cool when you connect to the PLC, convert to Sparkplug, send it up to Snowflake, and, as Arlen has shown, bam, you've got a table that matches the definition that came from the PLC. And it's like, okay, that's pretty slick: not only did the data come, but the semantics around that data came with it, too. Yeah.
Speaker 3 55:31
And what Aaron just outlined there is the whole "why do you choose one versus the other": Sparkplug, just vanilla MQTT JSON, or OPC UA PubSub over MQTT with JSON. Whatever you choose, it's going to be whatever the consumer can understand. The whole point of those protocols, at least where we're at now, is to eliminate all the integration work of manually parsing the payload. That's part of the beauty of Sparkplug: when you connect a Sparkplug device into a SCADA system or a historian, you don't have to parse the payload. It natively understands: oh, here's the data, here's how it's structured, here are all the data types, all the metadata associated with it, and any templates or UDTs associated with that. When you use native MQTT JSON, you have to manually parse through it, and you're at the mercy of however the publisher chose to structure their data. They may have done it in a very good way. It could be crap. You're at the mercy of it, and you have to learn it. Yeah. So
Speaker 1 56:41
You have to learn it, because you're really writing a decoder, a parser, which is a decoder. So you have to know how it was, not encoded, but you have to know how it was built and structured in order for you to unstructure it. Yeah,
Speaker 3 56:52
Who knows. I mean, maybe they used JSON, maybe they used XML, maybe they encoded images, and you've got to decode that in a binary format. But the point is that you select whatever your consumer can use. The idea is that you use the one you can natively consume, so you can pass it through to other consumers in the format they understand, like Snowflake, or AWS, or BigQuery, or whatever. I
Unknown Speaker 57:17
have a... go ahead here. Sorry. No, it's
Speaker 4 57:20
just a super important point. If John was on here, he would say this: think about the consumer. I have seen so many projects, like early on, two years ago, when Sparkplug was new. This was a large pharma. They were going through VPNs; they had UNS to UNS to UNS all the way up to Azure, then into Azure Data Explorer in Sparkplug B format. They didn't even decode it, just raw data. And we got it all to work, and it's like, now what are you going to do with it? Crickets. It was a six-month project that went nowhere. Yeah. And we,
Speaker 2 57:48
and I hate that, Aaron, because we have created so many data swamps. It's almost embarrassing.
Speaker 1 57:55
Well, this goes to the next question. So Lachlan asks: what are your thoughts on the evolution of MQTT to the point where enterprises are using a broker as a host for a unified namespace and single source of truth for their entire business (he says operation)? And I would correct it: I rarely see one broker as the host. It's generally distributed, either vertically or horizontally across many brokers; it's more important to think of it as a fabric, as opposed to a single broker as your single failure point. He says: if MQTT at the end of the day is a messaging protocol, is it being taken too far to be used in this way, or is this what it was designed for all along? It's kind of a false choice there, but
Speaker 2 58:43
You know, from my perspective, and again, Aaron brought it up, and I think we've all worked with Amazon, we've all worked with Azure. And Walker, I think you and I had a discussion offline on this. I've found that Snowflake becomes the ultimate kind of UNS single source of truth. What we did with Snowflake is we didn't do anything. That was the beauty of it. We didn't create an IoT service. We took Sparkplug, and to everyone's point, when it goes through Snowpipe Streaming it gets converted to JSON, but all the Sparkplug schema is there. We build it in Snowflake, and it's literally your UNS. Now I have a place to go not only for my real-time data: if you use MQTT as your UNS and you went to the broker, the best case is you're going to get last known good. I want to go someplace where I still have my Sparkplug structure, but I've also got the history behind it, forever, if you're using Snowflake. So I really like the fact that Snowflake sits on top of AWS, sits on top of Azure, sits on top of Google Cloud Platform, but I can create a Sparkplug-centric database. Now, if you know SQL, go knock yourself out. Yeah, we're
Speaker 1 1:00:12
using Snowflake now for what we used to use Kafka, a data warehouse, and an analytics layer for. We're in Snowflake for it natively, and it keeps the schema intact. It is literally Snowflake on top of UNS. It literally is the platform for commoditizing the data. There's so much of our architecture we've literally stripped out, and now it's just Snowpipe into Snowflake from the topical namespace.
Speaker 3 1:00:43
The analogy I like to use here, and this is the evolution we're seeing as we use MQTT more: we went from serial connections, which are very point-to-point. Then we went to Ethernet switches, which allowed us to create local area networks where multiple devices talk to one another. Then Ethernet routers, where now I have a single endpoint, a router, and I can access the world. We have something similar here. Point-to-point is just your standard connection to whatever the device is, and everything goes into it. MQTT brokers are those Ethernet switches: they're allowing the data to flow through, and that's really all they're doing. They're not really meant to manipulate the data. That's where we need these routers, data routers, a single point for me to go to where I can access any data in the format that I need. And that's where we're seeing this idea of Snowflake or other technologies: once you get the data there through those Ethernet switches, the MQTT brokers, now I've got my default gateway, which is that Snowflake instance. Any consumer can go to it and get the data the way they need it, in the format they need it, and presumably over any number of protocols: it could be HTTP, it could be MQTT for goodness' sakes, whatever. So that's kind of the evolution we're going through in industry.
Speaker 1 1:02:13
Forbort asked: why Quality of Service 0? Why not allow a choice of quality of service? So why does the Sparkplug working group seem to insist on staying the course with QoS set to zero? It's a good question, a common question.
Speaker 2 1:02:29
Okay, so I'm sitting in front of a SCADA pipeline operator, and this is a true story. I had a bunch of IBM suits in there. We were at Plains Midstream, and Brian Engberg is the SCADA manager for Plains Midstream, Plains All American. He's a really great guy, about my age, a straight shooter. And he goes, okay, so are you going to tell me how you're going to send commands? And before I could even answer the question, the IBM guys go, oh, we'll put quality of service 2 on that, we'll make sure it gets there. Wrong answer. Eventually. Eventually, yeah. Yeah, it needs to
Unknown Speaker 1:03:17
get there when it was executed, or not at all.
Speaker 2 1:03:20
Exactly. So the reason I am so adamant that we keep Quality of Service 0 at the SCADA level is a safety concern. I've played with too many brokers, and I think you can all agree: you start looking at quality of service, and, man, things just keep showing up. Hey, I can't get that out of there; it's still showing up at the broker; we had it turned off for two months and commands are still showing up. We can't have that. So I agree, we could start looking at quality of service when we get outside of that, and we are looking at that in the next version of the Sparkplug spec. But at that control layer, we only use Quality of Service 0. We have other mechanisms built in, like sequence numbers in Sparkplug, that will guarantee you're not losing data. But we're not using quality of service, for that very reason: I don't want setpoint commands and valve commands showing up at some point in the future.
Speaker 1 1:04:27
And it also keeps all that confirmation off the wire. If you use Quality of Service 0, then you're attempting it once, at that moment. If it doesn't work, you don't execute the command. It's like, if you push a start button and you want to start a motor, it either starts the motor when you push the start button, or it doesn't start the motor at all. That's what QoS 0 is. Yes. Yeah.
Speaker 3 1:04:51
And I can see where it's providing a false sense of security, too. Because if they're saying, oh, I've got QoS 2: all you've guaranteed is that the broker receives the command. You really have no idea whether the broker even got it to the device. So people think they're getting something better with QoS 2, but in the end, maybe not.
Speaker 2 1:05:13
What people don't realize is that in every SCADA system that I know of, going all the way back to OASyS DNA, when you build in your control points, when you build in your setpoints, there's a command disagree. So if I send a valve open, in any SCADA system I'm going to have a command-disagree time window. And if that valve doesn't go from closed to in-transit within a number of seconds, you're going to get an alarm on your SCADA system that, hey, that command didn't go through, and we take corrective action.
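The command-disagree pattern Arlen describes can be sketched as a simple check on the reported state after a command is sent. The states and timings below are illustrative, not from any particular SCADA product:

```python
def command_disagree(observed, window_s):
    """observed: list of (seconds_after_command, reported_state) samples.
    Returns an alarm string if the valve never left 'closed' inside the
    window, or None if it started moving in time."""
    for t, state in observed:
        if t <= window_s and state != "closed":
            return None  # valve went in-transit in time: no alarm
    return "COMMAND DISAGREE"  # still closed after the window: alarm

# Valve only starts moving 6 s after an open command; window is 5 s:
alarm = command_disagree([(1, "closed"), (3, "closed"), (6, "in_transit")],
                         window_s=5)
# alarm == "COMMAND DISAGREE": the operator is alerted, no blind retry
```

The point of the pattern is that feedback comes from the process itself, not from a delivery acknowledgment, which is why QoS 2 adds little here.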
Speaker 1 1:05:49
Here's a common question. It came up multiple times, and people want you to comment on whether data integrity is an area of concern or interest. So Michael Byron says: another item, the data integrity of the protocols. How do you ensure what's sent is received in the UNS? Is our data in transit encrypted? How do devices authenticate to the UNS, and how are unauthenticated devices prohibited? But in terms of data integrity: how do you ensure what's sent gets published and received in the UNS? And McCormick asks: is it an area of concern?
Speaker 2 1:06:30
Well, at the pure MQTT level, that's why we picked TCP/IP. Again, Andy and I were really pressured to go with UDP: oh, Arlen and Andy, it's so much more efficient. And we would have ended up writing our own bad version of TCP to begin with. So we're letting TCP do all the assembly and reassembly, and we trust that if we send a piece of data over TCP/IP, it's not going to get mangled; all the checksums are inside of TCP/IP. Now, we're getting that data from equipment over protocols, Allen-Bradley protocols, or Modbus, or OPC UA, and we're trusting that we're getting the right value there. There's nothing we're going to do at the protocol level to say, oh, well, that OPC UA tag came from an Allen-Bradley PLC, how are you going to check it? Right? So I'm trusting that we've got a very reliable way of getting information out to the consumers, and as far as its own internal security, I think all of the checks and balances are there.
Speaker 4 1:07:44
I think one thing I see that is sometimes a red flag is people creating tunnels through MQTT, where it's not one producer, multiple consumers; it's one producer, one consumer, and I want to know when that consumer goes down so I can start to store-and-forward data. And you can do it, right: you can look at the death certificate of the consumer. It is a pattern, but any time you see it, you really have to start to question: am I using the technology correctly? It's point-to-point.
Speaker 1 1:08:08
Point-to-point, yeah. Right. The rule in unified namespace is not that everything interoperates through the infrastructure; it's that we are removing most point-to-point connections, because most point-to-point connections are designed to collect events and turn those events into something else. So as long as we're collecting an event, we want to do it through the IIoT infrastructure. But there are lots and lots of applications, maybe 20% of them, that are still point-to-point. The rule is that if you create any new data or information, one or both of those nodes has to publish it into the UNS so that it's accessible by other consumers. But that doesn't mean the work of the application itself took place through the broker. And I agree with you: I see lots of applications where people are trying to do things through an MQTT broker where it's much more appropriate to just go through the native connector between those two. I mean, there are things built into the connector that you would have to rebuild inside of a namespace. If we've
Speaker 4 1:09:16
got 10 years of PI data and they're going to route that through HiveMQ and then up to Snowflake, and they ask, what's the max packet size of an MQTT message? You shouldn't even have to ask that question. It's 256 meg, but you're already misusing it. Yeah,
Speaker 1 1:09:31
For those of you who are not systems engineers or network engineers: TCP builds the reliability into the transport. That's the fundamental difference. There are delivery guarantees in TCP, as opposed to UDP, where you're just throwing the message out and there's no reliability check. So they're offloading a lot of it to TCP. In
Speaker 4 1:10:01
fact, OPC UA does not rely on that. You have to do that yourself in code, and it's terrible code to write. And
Speaker 1 1:10:08
I've never understood why that's there when you can achieve it with TCP. I never understood it, because it's not required that it be there. Again, it's optional. Well,
Speaker 2 1:10:20
And after talking to you, Walker, I was reading the Part 14 spec, and right at the beginning it says, well, you can use Part 14 over UDP, TCP, MQTT. And I'm going, whoa, wait a minute. I hadn't even finished reading it, and right there I'm kind of lost.
Unknown Speaker 1:10:42
Ethernet, straight on the wire.
Unknown Speaker 1:10:46
Go ahead. Matt, you got a question?
Speaker 3 1:10:48
Well, just that Matt McCormick was asking why you'd use QoS or sequence numbers in Sparkplug when TCP can guarantee delivery. TCP guarantees delivery, but packets may still come out of order across reconnects and retransmissions, so the sequence numbers help you rebuild, on the other side, the order they should have come in. Yeah,
Speaker 2 1:11:08
On the SCADA side, you know, there was some misunderstanding, some ambiguity, in the original MQTT spec. I had always assumed that you were guaranteed in-order delivery of messages from a producer to a consumer. With some brokers that may not be the case, especially when you start going through load balancers: you can have a message arrive out of order. AWS IoT Core, with Amazon, puts load balancers in front of it, so you can get messages out of order. So with the Sparkplug spec, we give you a window so that before you pass that data to a SCADA system, you reorder those messages. Because you don't want data arriving out of order. In other words, if you sent a valve command and you saw it go to closed before you saw in-transit, the operator would have a heart attack. So we've got to make sure that at the receiving application, those messages do arrive in order.
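The reorder window Arlen describes can be sketched using Sparkplug's wrapping 0-255 sequence numbers. This is a simplified illustration (no timeout or rebirth-on-gap handling, and string payloads stand in for real messages):

```python
class ReorderBuffer:
    """Hold out-of-order messages and release them in sequence order."""
    def __init__(self):
        self.expected = 0   # next Sparkplug seq number to deliver
        self.held = {}      # seq -> payload, waiting for the gap to fill

    def receive(self, seq, payload):
        """Return whatever is now deliverable, in order."""
        self.held[seq] = payload
        out = []
        while self.expected in self.held:
            out.append(self.held.pop(self.expected))
            self.expected = (self.expected + 1) % 256  # seq wraps at 255
        return out

buf = ReorderBuffer()
buf.receive(0, "valve: in_transit")   # delivered immediately
buf.receive(2, "valve: closed")       # held back: seq 1 is missing
late = buf.receive(1, "valve: open")  # releases seq 1 then 2, in order
# late == ["valve: open", "valve: closed"]
```

Only after this reordering step does the data reach the SCADA application, which is what prevents the closed-before-in-transit scenario above.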
Speaker 1 1:12:17
That check has to take place before the execution. Right, exactly: the sequencing first. So what I want to do is ask one other question from O'Donovan, and then I'm going to turn it over to Aaron and Matt: what would you guys like to see, either in the Sparkplug spec, in a new spec, or in the MQTT spec? What would you like to see change going forward? And then we'll do our call to action at the end. But, O'Donovan, I actually really liked Mark's question here, which is: what do most people not realize about MQTT that they would really benefit from knowing? This is a great question. So, Arlen, what is the thing that most people don't realize about MQTT that they would really benefit from knowing?
Speaker 2 1:13:01
Well, the first thing is that it's used for all kinds of applications. You can't imagine how many things are using MQTT that you never knew about; I get calls from all over the place, and a lot of it is weird. They read the MQTT spec and they go, oh, you can publish and subscribe, great. But nobody bothers to read about the last will and testament. So 80% of the applications out there using MQTT, across all different markets, aren't even using the last will and testament capabilities of MQTT. The other thing I don't think a lot of people realize is the security model of MQTT. What I mean is: think classic. I've got a SCADA system, I've got a plant control system, and I must make outbound connections, and I have to know the ports and IP addresses of everything I want to talk to, and now I have to protect all that. Now let's flip it on its head: I've got an MQTT server, and all my equipment connects in to a single port, port 8883, and that's the only port I have to protect. That's a huge benefit over securing anything else you're going to put together. There's nothing more secure than an MQTT infrastructure done right.
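The last-will mechanism Arlen is pointing at works like this: at CONNECT time the client registers a "death" message with the broker, and if the client drops without a clean disconnect, the broker publishes it on the client's behalf. Below is a toy in-memory simulation of that bookkeeping; real clients set the will on the MQTT CONNECT packet (for example via paho-mqtt's will_set()), and the topic shown is illustrative:

```python
class ToyBroker:
    """In-memory stand-in for a broker's last-will bookkeeping."""
    def __init__(self):
        self.wills = {}      # client_id -> (topic, payload) from CONNECT
        self.published = []  # log of (topic, payload) the broker sent

    def connect(self, client_id, will_topic, will_payload):
        # The will is registered up front, while the client is healthy
        self.wills[client_id] = (will_topic, will_payload)

    def drop(self, client_id):
        # Ungraceful disconnect: broker publishes the stored will itself
        self.published.append(self.wills.pop(client_id))

broker = ToyBroker()
broker.connect("edge-node-1", "spBv1.0/PlantA/NDEATH/edge-node-1", "offline")
broker.drop("edge-node-1")
# broker.published now holds the death certificate for edge-node-1
```

This is also the primitive Sparkplug builds its death certificates on, which is why skipping it leaves so much of MQTT's state awareness on the table.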
Speaker 1 1:14:29
And imagine that only a handful of those clients actually publish. We oftentimes put the broker in the DMZ. Let's say I've got 12 MQTT clients all in a circle around that broker, but only three of them are clients coming from L1 and L2. There is no inbound port open to those clients — they connect outbound to 8883 in the DMZ, and all of the other applications come inbound to 8883. There's nothing inbound open on the plant floor, on the edge, at all. You're instantiating that connection from the edge. And when you talk about inherent security, the reason you know it's inherently safe is you're not filling out any requests for IT to open any ports. When you're doing these applications, you realize, oh, it's already locked down, and you're still good to go. All right — Aaron and Matt, what's missing? What do you want Arlen to use his influence to change about MQTT, or about the Sparkplug spec? We'll go with Aaron first.
Speaker 4 1:15:44
Yeah, I'll pick on Sparkplug again — and I'll pick on it knowing exactly where it came from, and everything Arlen said makes sense. But for me, having a native JSON encoding, to make it easier for people to use it and discover it, would go a long way. I did look at some of the Sparkplug C enhancements, and I realized we change the root of the topic every time there's a new version — which seems trivial, but it means I have to produce a software update to play with anything that's Sparkplug C.
Speaker 2 1:16:15
Well, I will tell you the mistake that I made with Sparkplug A: we had a thousand of those out there for Phillips 66, and we wanted to add a feature to it, but we didn't have that versioning in place. So it was impossible to just go make incremental changes. Whereas now we can go from B to C to D — you're going to see that right when you subscribe and go, look, I don't know anything about C. That's okay, you can ignore it.
Speaker 1 1:16:53
Yeah, I get where it comes from. No, I do too. But the problem is that upgrading to a new version of the standard — you need to take into account that as I'm upgrading to a newer version of the Sparkplug standard, everything I've built is going to end up in a completely different root node in the broker, and you have to architect to account for that. My pointers have got to look in a different place based on which version of the spec I'm using.
Speaker 2 1:17:31
That's true. And I would love to say there was a magic bullet for that, but there really isn't.
Speaker 1 1:17:38
I don't know — the ability to filter on that root node tells you which part of the spec is used, and that metadata doesn't exist anywhere else. You can't consume it somewhere else, so otherwise there'd be no way to tell which standard was used to build that edge node network. Right.
Speaker 3 1:18:02
I think along those lines, top of mind as an improvement for Sparkplug specifically: right now it's a very rigid structure of topic paths. You've got your namespace element first — spBv1.0, which you guys just talked about — then you've got your group ID, then your verb, which is like NDATA or DDATA, then your node ID and device ID. I think converting it into a more flexible topic path would help. For example, is there really any reason not to allow someone to prepend topics before that root, spBv1.0? Does anyone really care? Because when you set up a consumer, you still have to point it at something: I'm connecting to this broker, and here's the topic I've got to start with. It knows to parse through that topic scheme to hit the root node, and from there I know I'm in Sparkplug mode downstream of that. So anything that adds —
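A small sketch of the rigid topic layout Matt is describing. The Sparkplug B topic form discussed here is `spBv1.0/<group_id>/<message_type>/<edge_node_id>[/<device_id>]`; the parser below is an illustration of why prepended path segments break today's consumers — the namespace element is assumed to be the very first level. (The example topics and IDs are invented; the full spec also defines a separate STATE topic not handled here.)

```python
# Illustrative parser for the rigid Sparkplug B topic structure:
#   spBv1.0/<group_id>/<message_type>/<edge_node_id>[/<device_id>]

def parse_sparkplug_topic(topic):
    parts = topic.split("/")
    # Today's rule: the namespace element must be level 0, nothing before it.
    if len(parts) not in (4, 5) or parts[0] != "spBv1.0":
        return None  # not a Sparkplug B topic this consumer understands
    return {
        "namespace": parts[0],
        "group_id": parts[1],
        "message_type": parts[2],   # e.g. NBIRTH, NDATA, DDATA, DCMD
        "edge_node_id": parts[3],
        "device_id": parts[4] if len(parts) == 5 else None,
    }

print(parse_sparkplug_topic("spBv1.0/PlantA/DDATA/edge01/press03"))
# A UNS-style prepended path defeats the rigid parser entirely:
print(parse_sparkplug_topic("site1/line4/spBv1.0/PlantA/NDATA/edge01"))  # None
```

Matt's proposal amounts to making that leading path a configurable attribute of the consumer instead of a hard-coded `parts[0]` check.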
Speaker 1 1:19:06
Any of the clients you were using would just need an attribute, which would be the path to the Sparkplug root. That's —
Speaker 3 1:19:13
Right. But they need that already, because they've got to know: what is my group ID, what is my node ID? It would just be a third variable: what is your topic path root?
Speaker 2 1:19:25
Yeah, you know, I kind of like that, Matthew. But again, with Sparkplug, with no prepended topic on there, you don't have to know group, node, or device — you can do a wildcard on spBv1.0 and you'll get all of that and auto-discover it. If you prepended something, then you would have to know about it to start with. Well, I guess you wouldn't — you could still do a wildcard, you'd still get your prepended topic space, and then you hit the, oh, there's Sparkplug B or C or whatever, and now I can go down from there. But at least I've got that prepended topic on the front of it. I do kind of like that.
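Arlen's point about wildcards still working can be shown with a sketch of MQTT topic-filter matching (`+` matches one level, `#` matches the rest). Assuming invented example topics, a `#` subscription still discovers Sparkplug traffic under a prepended site path, because the consumer can scan each arriving topic for the `spBv1.0` element and descend from there.

```python
# Minimal MQTT topic-filter matcher: '+' = exactly one level,
# '#' = this level and everything below (must be last in the filter).

def matches(filter_, topic):
    f, t = filter_.split("/"), topic.split("/")
    for i, level in enumerate(f):
        if level == "#":
            return True
        if i >= len(t) or (level != "+" and level != t[i]):
            return False
    return len(f) == len(t)

topics = [
    "dallas/line4/spBv1.0/PlantA/NDATA/edge01",   # prepended site path
    "spBv1.0/PlantA/DDATA/edge02/dev1",           # classic Sparkplug root
    "erp/orders/new",                             # unrelated traffic
]
# Subscribe to everything, then keep topics containing a spBv1.0 element:
sparkplug = [t for t in topics if matches("#", t) and "spBv1.0" in t.split("/")]
print(sparkplug)   # the first two topics, prepended or not
```

The trade-off Arlen names survives in code form: with a prepend you can no longer subscribe to `spBv1.0/#` directly — you either know the prefix up front or filter after a broader subscription.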
Speaker 3 1:20:05
Because if I'm a device, you could create your UNS kind of hierarchical structure that way, where you're prepending your plant — you know, site, area, line — ahead of Sparkplug, and then here's where the Sparkplug namespace takes place. You don't have to worry about overstepping each other, because you could have multiple Sparkplug namespaces in parallel on the same broker — you've segmented them out with your prepended topic structure. Because —
Speaker 1 1:20:31
right now, the namespace parsers just look for spB—
Speaker 3 1:20:37
—v1.0, or whatever, right, as the root node. It just becomes another piece of configuration you add when you set this up as an architecture.
Speaker 1 1:20:48
Like that. The beauty of that — by the way, it never even occurred to me. But the other advantage is, one of the things we see right now is that as you move up the stack, you're much more likely to be looking at flat MQTT 5 at the enterprise level, and Sparkplug namespaces are subsets of the actual enterprise unified namespace. Right now, what you have to do is use an external tool to pull the Sparkplug namespace into a flat namespace. But if you had this ability, the Sparkplug namespace would land in the UNS — you could actually map it into the UNS just on the connection itself, which we don't do right now. Right now we bring it in as Sparkplug and then reference it indirectly to get it into the UNS. Aaron, what about you — anything else you'd like to see?
Speaker 4 1:21:53
For MQTT, I would love to see smart subscriptions — some way for me to subscribe to a topic with a little bit of logic. Deadband would be the simplest one, equivalent to OPC. Like, I want to subscribe to this topic and let me know when this particular attribute changes,
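Since brokers don't offer this today, here is a sketch of the deadband logic as it currently has to live on the client side — the kind of filter Aaron wishes could be attached to the subscription itself. The class and threshold are invented for illustration: a value is surfaced only when it moves more than `deadband` from the last reported value.

```python
# Client-side deadband filter: suppress updates that haven't moved
# meaningfully since the last value we reported upstream.

class DeadbandFilter:
    def __init__(self, deadband):
        self.deadband = deadband
        self.last = None   # last value actually reported

    def update(self, value):
        """Return the value if it is a reportable change, else None."""
        if self.last is None or abs(value - self.last) > self.deadband:
            self.last = value
            return value
        return None

f = DeadbandFilter(deadband=0.5)
print([f.update(v) for v in [10.0, 10.2, 10.6, 12.0, 12.1]])
# [10.0, None, 10.6, 12.0, None]
```

This is exactly Walker's "Wild West" complaint below: every client implements its own version of this, with no standard saying how.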
Speaker 3 1:22:10
and filters on the subscription, some
Unknown Speaker 1:22:14
— a simple way from the client side to control what I get back.
Speaker 1 1:22:19
I mean, right now all these things we can build ourselves, but each person who builds them is building them non-standardized, and we can't share them between each other. Right? I can write a method, I can put a method on a topic — I can do that. I just have to have it in my client monitoring that topic, and then if that command comes through, I can do something with it. But you're just building it yourself — there's no standard telling you how to build it, so it's just the Wild West.
Speaker 4 1:22:48
Yeah. The other problem is, when you're trying to do stuff like that, Walker, you'll see design patterns where data will come out of the back end of the broker and then back in again. And if you do that a few times, something breaks, and suddenly all these topics go bad and you can't figure out why. It does happen — you can create a mess pretty quick.
Unknown Speaker 1:23:06
I like that.
Speaker 1 1:23:08
Mark O'Donovan said: I'd rather have to put the device and node in the payload and get total flexibility on the topic path. You could add the Sparkplug version into that payload — better if it was in a metadata portion of the payload. The only issue there, Mark, is that means there's no semantic organization of an edge node network if you do that.
Speaker 3 1:23:38
That requires a subscription to figure out whether you can even understand the message or not. If it's included in the topic path, you immediately know: I'm not going to be able to decode this, right?
Speaker 1 1:23:50
You can just skip over it. Arlen, anything you want to add before we take it home? Anything else?
Speaker 2 1:24:00
Not really — this has been great. I've enjoyed it, and I appreciate the opportunity to talk about it. You know, every day I'm blown away. Andy and I both think back — I can remember I said, oh, we're going to put a big broker in the cloud, and literally this is before cloud computing, and everything will publish to that, and then we'll be able to have that, quote, unified namespace. Which was interesting. But really, people have used it. The community that we built around it is awesome — this whole group that you've got is awesome. And that's why, here I am, 65 years old, and I still love it. It's a great opportunity. This community is great; I love working with everybody.
Speaker 1 1:24:48
Awesome. What I'd like to do in the future, then — I think a really valuable conversation would be to have you come back and we spend the bulk of our time talking about Snowflake, and how you see it. I mean, we see it the same way in terms of how much of a game changer it is for achieving many of the ends — the whole reason we're trying to acquire data and what we want to do with it. Snowflake. Yeah,
Speaker 2 1:25:16
it was interesting, right? We were both driving to Walmart at 55 miles an hour and we didn't know it. And we both got there, and then we had a conversation going, oh gee, you know, I should have talked to you sooner.
Speaker 1 1:25:29
Literally half our conversation was just talking about Snowflake, and we're on the same page in terms of what its implications are. Aaron, anything you want to chime in on before we take it home? And then I'll go to Arlen.
Speaker 4 1:25:43
It's been an honor. I think back, like, ten years ago when I met you at that conference, and ever since — just keep it up, keep going, please. I think Sparkplug has been amazing. When I joined HighByte a few years back, to see Sparkplug grow to where it is — it's what I pictured as a really grassroots standard, which is refreshing. So thank you.
Unknown Speaker 1:26:03
I appreciate that. Matt?
Unknown Speaker 1:26:05
Yeah —
Speaker 3 1:26:07
I really like the agile approach: get it into the hands of the users, get that feedback, and then improve and iterate. Don't let it be a waterfall where it's on paper and you try to convince the market to adopt it — you struggle there, and you haven't even seen the pain points of deploying these technologies. Keeping that tight iteration on a standard is fantastic.
Speaker 1 1:26:29
Perfect is the enemy of progress, right? That's what it is — perfect is the enemy of progress. Yeah, it is.
Speaker 2 1:26:35
And remember — I do want to mention one thing here. Edge network computing is great, and we're having all these discussions around, you know, I've got my edge and all that. But remember, my goal, my vision, is that all my equipment is already talking Sparkplug. So I go install my factory, and my factory just comes up. That's what I want. It's crazy that we as engineers, in 2024, have to connect protocols to machines, when we've got plenty of compute power to just set up a broker in our factory — our machines connect, they tell us what they can do, we organize that, and now we're up and running. And oh, by the way, if a bulldozer comes in and runs over a control system, that's fine: put another one in, let it relearn, and we're back up and running. That's where we've got to be
Speaker 1 1:27:33
headed. Amen. So it's a wholly self-aware, discovery-based infrastructure. Exactly. Yeah.
Speaker 3 1:27:39
If you want a harebrained idea: how about, instead of statically assigning what topic to publish to, something like a DHCP-type concept, where a client connects and asks, who should I be publishing to? And then it just starts to do that. I mean, all these concepts are available in IT — there's no reason we can't do the same on the data side.
Unknown Speaker 1:28:02
I agree. Awesome.
Speaker 1 1:28:04
Alright guys, I appreciate you joining us. We went way over, but hopefully it was valuable. Thank you guys for watching. Questions for Arlen? Put them down in the comments below. Like, subscribe, comment down below. We'll see you in the next one.
Transcribed by https://otter.ai