Problems with CORE

Don’t let old age put your spirits down, Major.

We just might keep changing the architecture.

Since it might not be possible on Microsoft Windows with multiple sockets, why don’t we use a single one?

The way I see it is to increase the coupling between elements even further, at the cost of isolation guarantees.

In short: reuse the very socket used by Kademlia for outgoing UDT datagrams; we’ve been doing a similar thing for incoming Kademlia traffic already, just the other way around.
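
A very rough sketch of that single-socket idea, in Python, just to make it concrete; the one-byte protocol tag, the port and the function names are all made up by me, not how Core actually multiplexes:

    import socket

    KADEMLIA, UDT = 0x01, 0x02     # hypothetical protocol tags for demultiplexing

    # One shared UDP socket: Kademlia listens on it and UDT sends through it.
    shared = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    shared.bind(("0.0.0.0", 443))  # binding to 443 needs admin/root rights

    def send_udt(payload: bytes, endpoint):
        # outgoing UDT datagrams reuse the very socket Kademlia already owns
        shared.sendto(bytes([UDT]) + payload, endpoint)

    def serve_once():
        datagram, source = shared.recvfrom(65535)
        tag, payload = datagram[0], datagram[1:]
        if tag == KADEMLIA:
            print("Kademlia message from", source, "-", len(payload), "bytes")
        elif tag == UDT:
            print("UDT datagram from", source, "-", len(payload), "bytes")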

Use netstat -bano to check for listening ports on a Windows machine.

If any datagrams managed to get through, you would then see the other endpoint next to the entry.

I’ve been playing around a lot with netcat to reproduce the situation.

If anyone else would like to join in:
On Microsoft Windows, download Nmap version 7.92 (since 7.93 has trouble with OpenSSL on Windows).

  • to start a UDP server:
    ncat -u -l 443

  • to connect as a client:
    ncat -u localhost 443

Type text at either the client or the server and hit Enter; you should see it at the other end.

That pretty much reproduces the situation.

Both @Alpacalypse and I are using T-Mobile 4G Internet in Poland.

I have a business tier with a public IP and no NAT; @Alpacalypse is behind a NAT.

Even though my node is able to synchronize just fine and I experience no issues at all, I’ve noticed something strange…

From some locations (including our bootstrap nodes) T-Mobile would not allow direct connections to my public IP address at port 443, and from some locations it WOULD.

It would allow connections to any imaginable port (including port 1) BUT … not to port 443. It is possible that things get even messier when someone is behind their NAT. This MIGHT be because their EULA does not allow running servers (and 443 is, well, the port most often used by those).

But then this does not explain why they pass connections through from some remote endpoints and not from others.

Haven’t you stumbled upon this?

When nc is listening on a UDP socket, it ‘locks on’ to the source port and source IP of the first packet it receives.

Actually I did… no issues with connectivity from the bootstrap nodes… it was just that netcat was already bound to another peer… not a good behavior if you ask me… binding to a client in a stateless protocol…

The datagrams are clearly visible in Wireshark, though.

Translation for others: netcat would show data only from the very first client, which also explains why the socket gets ‘connected’ to the client address as shown by netstat; that’s not normal for UDP sockets.
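
For the record, a minimal Python sketch of that behaviour; not ncat’s actual code, just the same idea, using the same port as the test above (binding to 443 needs admin rights):

    import socket

    srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    srv.bind(("0.0.0.0", 443))

    data, first_peer = srv.recvfrom(2048)
    print("first datagram from", first_peer, data)

    # 'Lock on' to that source IP:port. From now on netstat shows a foreign
    # address for this UDP socket and recv() only ever returns datagrams from
    # first_peer; everyone else's datagrams are silently dropped by the OS,
    # even though Wireshark still captures them on the wire.
    srv.connect(first_peer)
    while True:
        print(srv.recv(2048))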

I’ll now go ahead and test things on T-Mobile via NAT…

I’ve spent the entire day researching NAT traversal on T-Mobile connections behind NAT; the write-up is here.

Woah… no wonder there were no commits to the code repository from you today :laughing:

I’ve never seen hole-punching carried out in such an experimental fashion from a Linux/Windows command line… I could probably incorporate that into my classes…
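
For anyone curious, the trick boils down to roughly this; a Python sketch with placeholder ports rather than the exact command lines from the report, and it only covers the case where neither NAT randomizes source ports:

    import socket
    import sys

    LOCAL_PORT = 50000                                   # fixed local source port (placeholder)
    peer_ip, peer_port = sys.argv[1], int(sys.argv[2])   # the other side's public endpoint

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", LOCAL_PORT))
    sock.settimeout(2.0)

    # Both peers keep poking each other's public endpoint; once both NATs have
    # created an outbound mapping, the datagrams start getting through.
    for _ in range(30):
        sock.sendto(b"punch", (peer_ip, peer_port))
        try:
            data, addr = sock.recvfrom(2048)
            print("hole punched, got", data, "from", addr)
            break
        except socket.timeout:
            continue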

Anyway! I’ll take it from here and check whether responses from Core are dispatched to the port number the datagrams were received from; after reading through your report I can clearly see that this could be the issue…

Don’t tell me you’ve hard-coded the reply-to port number…

Your work did not go to waste, my friend.

I can confirm that round-trip datagrams were not being dispatched to the appropriate port number, as suggested by your experimental findings.

There was a lot happening while we were introducing the protocol multiplexing mechanism.

The source port number of received datagrams is currently being dropped from the data structure representing endpoints during conversions between sub-systems; the default port number (443) is used instead. I’ve created a high-priority task, pending resolution within 72 hours.
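
In sketch form, the fix amounts to replying to whatever endpoint recvfrom() reports instead of rebuilding it with the default port; Python with made-up names, not the actual Core code:

    import socket

    DEFAULT_PORT = 443   # the port the buggy endpoint conversion fell back to

    def handle(payload: bytes) -> bytes:
        return payload   # placeholder for the real protocol logic

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", DEFAULT_PORT))

    while True:
        payload, (src_ip, src_port) = sock.recvfrom(65535)

        # Buggy behaviour: replying to (src_ip, DEFAULT_PORT). Behind a NAT the
        # mapping only exists for src_port, so such a reply never arrives.
        # sock.sendto(handle(payload), (src_ip, DEFAULT_PORT))

        # Fix: reply to the exact source endpoint of the datagram.
        sock.sendto(handle(payload), (src_ip, src_port))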

I can confirm that the current (now updated) main branch can traverse NATs just fine in a scenario where one of the two communicating parties is behind a NAT; that’s mostly thanks to the research performed by @vega4.

Glad to be of help.

I’ve done some more digging into cone NATs, symmetric NATs, etc. Basically, if we had to accommodate all cases (i.e. two users behind NATs that randomize the source port for each new target IP), then we would need to make data packets pass through the computers of users with public IP addresses.
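
Sketched out, such a relay on a public-IP machine would look roughly like this (Python, placeholder port, not project code); both NATed peers keep a stable mapping towards the single relay endpoint, which is why it works even when direct hole punching does not:

    import socket

    RELAY_PORT = 40000   # placeholder port on the public-IP machine

    relay = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    relay.bind(("0.0.0.0", RELAY_PORT))

    peers = []   # public endpoints of the two NATed parties, learned on first contact

    while True:
        data, addr = relay.recvfrom(65535)
        if addr not in peers and len(peers) < 2:
            peers.append(addr)              # register each side as it shows up
        if len(peers) == 2 and addr in peers:
            other = peers[1] if addr == peers[0] else peers[0]
            relay.sendto(data, other)       # forward to the opposite side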

One might say that we are the ones with the unprecedented technology and research background to make such a precedent game-theoretically fair (as some of us have conceived a communication protocol that incentivizes intermediaries on a per-byte basis), but I say this is not needed right now and we should focus on other things. @TheOldWizard?

I took a look at how Skype does this. Well, Skype would use the computers of those who own public IPs to let those who don’t learn about each other’s addresses (the NATs’ IPs and ports); yeah, they rely on the community to serve itself.
The approach goes further. If two Skype users are behind NATs that randomize source ports, Skype would use others’ (those who own public IPs) bandwidth and CPU power to act as intermediaries (not rewarding them in any way, and not letting them know, of course).

Nice findings. They showcase how the Big Boys (CEOs) squeeze every drop out of the herd, making the herd function all by itself while they collect the money.

It’s a good tactic, as long as one lets people know and rewards them accordingly. Companies usually tend to turn a blind eye to the second part. To them it’s okay as long as everything works (for the most part, at least), as long as it looks good and, on top of everything, as long as operational costs are kept to a minimum.

We, presumably being those ‘good fellas’ offering ‘decentralization’, have no other choice but to make the community run everything all by itself as well, since we do not want to impose any points of trust.

Here, the ‘selling point’ of this strategy is not cost minimization but improved decentralization guarantees.

STILL, we shall make sure everyone is well informed, we shall incentivize all the parties involved, and we shall make such functionality optional.

I say the crypto-incentivized STUN/TURN functionality is worthy of a research paper in its own right, @CodesInChaos.

I agree. We shall return to this topic once the community grows and once more people find themselves in need of it.

Boys, just a cold shower for you all: neither Bitcoin nor Ethereum even cares about implementing such things.

They rely on their node operators either having public IPs or taking care of NAT entries manually.

I appreciate your passion, but we are definitely better off dodging this bullet.

Hats off to @PauliX, our new Community Expectations Manager!


Damn, guys 'n gals, you did some very clever stuff in regard to my messed-up net. Appreciate that. @PauliX might be right to some extent though; operators should be knowledgeable enough to set things up on their side, not expect you all to do all the heavy work for them. It is still fabulous work :slight_smile:
