Proactive Resilience: Enhancing GRIDNET Core in Response to Unconventional Mining Practices

Dear GRIDNET Community, dear Operators!

We have encountered an unusual situation on our test-net. From block 14891 onwards, a group of miners running an obsolete version of the GRIDNET Core software has been producing blocks. The cumulative proof of work from these miners resulted in an invalid, yet weighty, chain of events.

As a result, without intervention from our team in the form of a ‘checkpoint’, it would have been virtually impossible for other nodes to synchronize with these miners. This issue left them stranded, unable to keep pace with the rest of the network.

When facing this kind of predicament, there are two ways to respond:

  1. Insert a checkpoint in the chain.
  2. Make GRIDNET Core more resilient and intelligent.

Unsurprisingly, we have opted for the second approach. Our team is working on creating heuristics that will prevent powerful miners from overwhelming the network with heavy yet invalid chains of events, as seen from the perspective of the current version of the software.

These heuristics will identify an invalid block, make a cut in the locally proclaimed Heaviest Chain Proof, and, most crucially, ‘vaccinate’ the node against recognizing such a Heaviest Chain Proof in the future, when partial chain proofs from other peers are delivered.

This exciting development is currently being implemented live on our YouTube channel. We believe these measures will not only resolve the current situation, but also significantly strengthen the robustness of the GRIDNET network.

We are committed to transparency, and we appreciate your understanding and patience as we continue to enhance GRIDNET Core. Thank you for being a part of this journey.

Best regards,
The GRIDNET Team

//Block Blacklisting - BEGIN

// Insertion of a block identifier to the black list
bool CBlockchainManager::blacklistBlock(const std::vector<uint8_t>& blockIdentifier) {
	std::shared_ptr<CTools> tools = getTools();
	tools->logEvent("Blacklisting block " + tools->base58CheckEncode(blockIdentifier), eLogEntryCategory::VM,100,
		eLogEntryType::warning,eColor::cyborgBlood);

	std::lock_guard<std::mutex> lock(mBlacklistedBlocksGuardian);
	auto result = mBlacklistedBlocks.insert(blockIdentifier);
	return result.second; // returns true if insertion took place, false if the element already existed.
}

// Check whether a block identifier is on the black list
bool CBlockchainManager::isBlacklisted(const std::vector<uint8_t>& blockIdentifier) {
	std::lock_guard<std::mutex> lock(mBlacklistedBlocksGuardian);
	return mBlacklistedBlocks.find(blockIdentifier) != mBlacklistedBlocks.end();
}

// Removal of a block identifier from the black list
void CBlockchainManager::unblacklistBlock(const std::vector<uint8_t>& blockIdentifier) {
	std::lock_guard<std::mutex> lock(mBlacklistedBlocksGuardian);
	mBlacklistedBlocks.erase(blockIdentifier);
}
//Block Blacklisting - END
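The post does not show the underlying declarations, but the methods above suggest a member container along the lines of a `std::set` of byte vectors guarded by `mBlacklistedBlocksGuardian`. The following self-contained sketch (with hypothetical names, not GRIDNET Core's actual declarations) illustrates the same semantics, including `insert().second` reporting whether the identifier was newly added:

```cpp
#include <cassert>
#include <cstdint>
#include <mutex>
#include <set>
#include <vector>

// Hypothetical stand-in for the blacklist state inside CBlockchainManager.
struct BlockBlacklist {
    std::set<std::vector<uint8_t>> blocks;
    std::mutex guardian;

    bool add(const std::vector<uint8_t>& id) {
        std::lock_guard<std::mutex> lock(guardian);
        return blocks.insert(id).second; // true only on first insertion
    }
    bool contains(const std::vector<uint8_t>& id) {
        std::lock_guard<std::mutex> lock(guardian);
        return blocks.find(id) != blocks.end();
    }
    void remove(const std::vector<uint8_t>& id) {
        std::lock_guard<std::mutex> lock(guardian);
        blocks.erase(id);
    }
};
```

Note that `std::set` compares `std::vector<uint8_t>` lexicographically out of the box, so no custom comparator is needed for raw block identifiers.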

An addition to the Flow mechanics (i.e. while processing blocks): if a node is unable to synchronize with the state of the current (!) block, it now blacklists the block and makes a cut in the Heaviest Chain Proof.


if (parent)
{	//Situation described in: https://talk.gridnet.org/t/proactive-resilience-enhancing-gridnet-core-in-response-to-unconventional-mining-practices/240

	//1) Black-list the block.
	blacklistBlock(blockID);

	//2) Make a cut in the Heaviest Chain Proof.
	//That is under the assumption that we start the Flow only for blocks present in the Heaviest Chain-Proof,
	//and that only these can be found to have an inconsistent final perspective.

	mHeaviestPathGuardian.lock();
	uint64_t bh = block->getHeader()->getHeight();
	if (mHeaviestChainProof.size() && bh < mHeaviestChainProof.size())
	{
		tools->writeLine("Attempting to make a cut in the Heaviest Chain Proof to remove block " + tools->base58CheckEncode(blockID), eColor::lightPink);

		if (tools->compareByteVectors(mHeaviestPath[bh], blockID))
		{
			//Truncate both parallel structures from the offending height onwards.
			mHeaviestChainProof.erase(mHeaviestChainProof.begin() + bh, mHeaviestChainProof.end());
			mHeaviestPath.erase(mHeaviestPath.begin() + bh, mHeaviestPath.end());
			tools->writeLine("Heaviest Chain Proof was altered.", eColor::orange);
		}
		else
		{
			tools->writeLine("The block was not found at the expected position while altering the Heaviest Chain Proof. " + tools->base58CheckEncode(blockID), eColor::cyborgBlood);
		}
	}
	mHeaviestPathGuardian.unlock();
}
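The cut itself boils down to a suffix erase on two parallel vectors, with the per-height block identifiers and the accompanying proof entries truncated at the offending height. A minimal illustration of that truncation, using hypothetical names rather than the actual GRIDNET Core members:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical: path[h] holds the block identifier at height h, and proof[h]
// holds the matching chain-proof entry. Drop both from 'height' onwards.
inline void cutAtHeight(std::vector<std::vector<uint8_t>>& path,
                        std::vector<std::vector<uint8_t>>& proof,
                        uint64_t height) {
    if (height < path.size())
        path.erase(path.begin() + height, path.end());
    if (height < proof.size())
        proof.erase(proof.begin() + height, proof.end());
}
```

Keeping both erases guarded and operating on their own container is important: mixing iterators from one vector into another vector's `erase` call is undefined behavior in C++.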

Additions to the Chain-Proof verification mechanics:

if (flashCurrentCount)
{
	progress = (int)(((double)i / (double)chainProof.size()) * 100);

	getTools()->flashLine("Verified " + std::to_string(i) + " Block Headers already ( " + std::to_string(progress) + " % )");
}
previousHeaderHash = mCryptoFactory->getSHA2_256Vec(chainProof[i]);

//Reject the entire partial chain proof if it references a black-listed block.
if (isBlacklisted(previousHeaderHash))
{
	return false;
}

As partial chain proofs are delivered, each referenced block is checked against the black list. Peers spamming the node with black-listed blocks get banned autonomously.
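The post does not show how the autonomous ban is implemented, so the following is only a sketch of one plausible shape: a per-peer offence counter that flags a peer for banning once it has delivered blacklisted blocks more often than a threshold allows. All names and the threshold policy here are assumptions, not GRIDNET Core's actual ban logic:

```cpp
#include <map>
#include <string>

// Illustrative only: counts blacklisted-block deliveries per peer and
// reports when a peer has crossed the spam threshold and should be banned.
class PeerBanTracker {
public:
    explicit PeerBanTracker(unsigned threshold) : mThreshold(threshold) {}

    // Returns true once the peer reaches the threshold.
    bool reportBlacklistedBlock(const std::string& peerId) {
        return ++mOffences[peerId] >= mThreshold;
    }

private:
    unsigned mThreshold;
    std::map<std::string, unsigned> mOffences;
};
```

A small threshold greater than one leaves room for an honest peer that happens to relay a single stale proof, while still cutting off deliberate spam quickly.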

In addition, I’ve added further heuristics to cope with a situation in which there is a data-integrity error, Core is unable to arrive at the world-view proclaimed by the prior leader, and no checkpoint is available.


if (!mFlowTransactionsManager->startFlow(initialPerspective))//let's begin the transaction flow. 
		{//By default the System Perspective needs to match the final Perspective reported by a PARENT block (if present).
		 //This may be overridden by a checkpoint (yet again, one 'covering' the previous block). 

			if (alternativePerspective.size() == 0)
			{
					//serious data integrity error - we're unable to reach the 'parental perspective'.
					//it's either a data integrity error or the team needs to include a checkpoint due to a change in processing.
					tools->writeLine("Unable to reach the Parental Perspective.", eColor::cyborgBlood);

					//we will assume it's a data integrity error.

					//Try to fix things - BEGIN
					bool fixed = false;
					tools->writeLine("Assuming a data integrity error, attempting to fix...", eColor::orange);
					if (currentLeader)
					{
						uint64_t currentLeaderHeight = currentLeader->getHeader()->getHeight();

						if (block->getHeader()->getHeight() && (block->getHeader()->getHeight() - 1 == currentLeaderHeight))
						{
							std::lock_guard<std::recursive_mutex> lVP(mVerifiedPathGuardian);
							
							tools->writeLine("Attempting to remove current leader..", eColor::lightPink);
							

								if (tools->compareByteVectors(mVerifiedPath[currentLeaderHeight], blockID))
								{
									mVerifiedChainProof.erase(mVerifiedChainProof.begin() + currentLeaderHeight, mVerifiedChainProof.end());
									mVerifiedPath.erase(mVerifiedPath.begin() + currentLeaderHeight, mVerifiedPath.end());
									tools->writeLine("Verified Chain Proof was altered.", eColor::orange);

									tools->writeLine("Attempting to set Previous Leader..", eColor::lightPink);

									if (mVerifiedChainProof.size())
									{
										eBlockInstantiationResult::eBlockInstantiationResult ires;
										std::shared_ptr<CBlock>  previousLeader = getBlockByHash(mVerifiedPath[mVerifiedPath.size() - 1], ires, true);
										if (previousLeader)
										{
											if (setLeader(previousLeader))
											{
												fixed = true;
												tools->writeLine("Proclaimed previous leader.", eColor::orange);
											}
										}
										else
										{
											tools->writeLine("Unable to instantiate previous leader.", eColor::cyborgBlood);
										}
									}
									else
									{
										fixed = true;
										tools->writeLine("No leader available.", eColor::orange);
									}


								}
								else
								{
									tools->writeLine("The block was not found at the expected position while altering the Verified Chain Proof. " + tools->base58CheckEncode(blockID), eColor::cyborgBlood);

								}
							

						}
					}
					else
					{
						tools->writeLine("No leader available.", eColor::orange);
					}

					if (fixed)
					{
						tools->writeLine("Removed faulty leader..", eColor::orange);
					}
					else
					{
						tools->writeLine("Unable to fix..", eColor::cyborgBlood);
					}
					//Try to fix things - END

@vega4 So if there’s a ‘data-integrity error’ (however unlikely), since Core verifies the Perspective after processing each of the blocks (by the end of the Flow), would it now remove the current Leader (block)?

@CodesInChaos Yes, exactly. The state machine would travel back in time by a single block. All the state would be recalculated, the Verified Chain Proof would be shortened by a single block. From there normal processing would resume.
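In other words, the recovery amounts to dropping the newest entry of the verified structures and re-instantiating the block now at the tip as leader. Schematically, with hypothetical names standing in for the actual members:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical: remove the newest entry from the verified path and return the
// identifier of the block that becomes the new tip (empty if none remains).
inline std::vector<uint8_t> rollBackOneBlock(
        std::vector<std::vector<uint8_t>>& verifiedPath) {
    if (!verifiedPath.empty())
        verifiedPath.pop_back();
    return verifiedPath.empty() ? std::vector<uint8_t>{}
                                : verifiedPath.back();
}
```

From the returned identifier the node would re-instantiate the block and proclaim it leader, after which normal processing resumes one height lower.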

We have already thoroughly tested and validated these new additions to the consensus formation protocol.

An update together with release notes is scheduled for today.