It's too early to tell. Sometimes it goes a few months between issues, other times only a few hours. It's probably been a month since 2.2.1 last had the issue, and 2.2.2 has only been on my systems for a few days.
There is a lot of network topology here, it seems. Seeing as you are able to report on the issue while it happens, would it be possible for you to run a tool such as Fiddler to record the network traffic? That may at least reveal what is trying to talk to what...
The network has changed a bunch since this issue started. I moved most of the systems other than MP2 to different subnets on their own VLANs, and I have one more VLAN planned for the MediaPortal systems just because of this issue. The topology has also changed: the MP2 server has a single 10Gb link to a Brocade ICX 6610, hopefully with SR-IOV soon. One MP2 client has 1Gb to the ICX 6610; the second MP2 client is now wireless to a D-Link DAP-2660, which connects by 1Gb to a Dell PowerConnect 5524P and then a 2x 10Gb LAG back to the Brocade ICX 6610.
I'll take a look at Fiddler. I was planning some port mirroring to an IDS, but Fiddler may be quicker to set up until I get around to deploying the big guns.
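In the meantime, a quick middle ground (assuming tshark is available; the interface name below is a placeholder) would be a filtered capture of just the UPnP/SSDP traffic:

    tshark -i <interface> -f "udp port 1900" -w ssdp.pcapng

That records everything on UDP port 1900 to a file Wireshark can open later.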
The WIP 135 x64 client has this issue too. I didn't see anything in Fiddler; it looks like it only monitors HTTP traffic, but Wireshark points to the MP2 UPnPServer. I disabled the network cards through Windows several days ago after I detected the network issue, hoping that the client would still be doing its thing when I had time to look at it. When I checked the client today, it was still using a huge amount of CPU even with the network disabled. Enabling the network card resumed the 3,000-11,000 packets per second of SSDP traffic to the multicast address 239.255.255.250.
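For anyone who wants to sanity-check the rate without Wireshark, something like this Python sketch (my own quick illustration, not MP2 code; it assumes the default interface on a single-NIC box) joins the SSDP multicast group and prints packets per second:

    import socket
    import struct
    import time

    SSDP_GROUP = "239.255.255.250"
    SSDP_PORT = 1900

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", SSDP_PORT))

    # Join the multicast group; 0.0.0.0 lets the OS pick the interface.
    mreq = struct.pack("4s4s", socket.inet_aton(SSDP_GROUP), socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

    count, window = 0, time.monotonic()
    while True:
        sock.recv(65535)  # payload discarded; only the rate matters here
        count += 1
        now = time.monotonic()
        if now - window >= 1.0:
            print(f"{count} SSDP packets/sec")
            count, window = 0, now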
Grabbed a bunch of screenshots from Wireshark, Process Explorer, and a few other places, plus Wireshark captures and client logs.
Attachments: ScreenShot506.jpg through ScreenShot535.jpg and ScreenShot537.jpg (31 screenshots), wireshark.7z (9.2 MB), MediaPortal2-Logs-2020-01-29-19.12.13 (client only WIP 135 SSDP issue).zip
That's an impressive amount of information provided!
I had a quick look over and a couple of bits stick out, but I need to digest things a bit more and time is not on my side right now. Please keep us updated, and I will try to come back to this when I have some more time, unless you resolve it before then.
One thing I have noticed is that the issue seems to happen when an MP2 client is left running without any interaction for a long period. I have an MP2 client-only install on my main desktop for testing. I had MP2 minimized for a few days without using it, and randomly, while using the computer for other stuff, all of my RDP sessions lost their connection and I found the system broadcasting a bunch of traffic. That might give some clue on how to duplicate this: leave an MP2 client open for a few weeks without using it and cross your fingers. I haven't found a specific trigger, which makes this very difficult to track down.
I recently fixed a small bug in the handling of SSDP NOTIFY messages. The "CONFIGID" value was once defined as "signed" and then later parsed as "unsigned". I have now seen negative CONFIGIDs that consequently failed to be parsed as unsigned, so those messages were no longer handled at all.
I have no idea whether this has anything to do with the issue here; I just wanted to mention it.
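Roughly, the failure mode looks like this (a Python sketch purely for illustration; parse_configid and the masking fallback are my simplification, not the actual MP2 parser):

    def parse_configid(raw: str) -> int:
        # UPnP 1.1 defines CONFIGID.UPNP.ORG as a non-negative integer, but a
        # stack that stores it in a signed 32-bit field can emit e.g. "-1".
        # A strict unsigned parse rejects that, and the whole NOTIFY is dropped.
        value = int(raw)
        if value < 0:
            # Tolerant fallback: reinterpret the signed 32-bit value as
            # unsigned instead of discarding the message.
            value &= 0xFFFFFFFF
        return value

    assert parse_configid("42") == 42
    assert parse_configid("-1") == 0xFFFFFFFF  # handled instead of rejected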