Increasing the number of pixel checks does not seem to improve the detection quality for me; I believe the main problem is that certain images have intense shadows, which can trick the algorithm into detecting an edge. For instance, the lower right area of the Commonwealth Fortress is almost completely dark, and the Manticore has a large shadow on its right side. The algorithm would work best if the shadows in the images were much lighter. I also need to find a balance between the frequency of dark pixels in detailed areas and along actual edges.
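To illustrate the false-positive problem, here is a minimal sketch of a darkness-ratio check of the kind described above. The threshold values and function names are my own assumptions, not the exact algorithm; the point is that a uniformly shadowed region clears the same dark-pixel cutoff that a real edge would.

```python
import numpy as np

DARK_THRESHOLD = 60  # hypothetical luminance cutoff; would need tuning per image set

def dark_pixel_ratio(region):
    """Fraction of pixels in an RGB region darker than the threshold."""
    luminance = region.mean(axis=-1)  # crude luminance: average of R, G, B
    return float((luminance < DARK_THRESHOLD).mean())

def looks_like_edge(region, edge_ratio=0.25):
    """Flag a region as an edge when enough of its pixels are dark.

    A large shadow also pushes the ratio past the cutoff, which is
    exactly the false positive described above.
    """
    return dark_pixel_ratio(region) >= edge_ratio

# Synthetic 8x8 RGB patches for illustration: one bright, one in shadow.
bright = np.full((8, 8, 3), 200, dtype=np.uint8)
shadow = np.full((8, 8, 3), 20, dtype=np.uint8)

# looks_like_edge(bright) is False, but looks_like_edge(shadow) is True,
# even though the shadowed patch contains no edge at all.
```

Lightening the shadows (or raising `edge_ratio` only in low-detail regions) would shrink that overlap, which is the balance mentioned above.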
Also, I have been playing around with color channel swapping (e.g. reading the RGB channels as GRB to swap red with green). Here are a purple Asterion and a green Scorpion.
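A swap like the red-green one can be written as a single indexing step with NumPy. This is a sketch on a synthetic pixel, not the actual image pipeline; real images would be loaded into an array with an imaging library first.

```python
import numpy as np

def swap_red_green(rgb):
    """Reorder the channels of an RGB array to GRB (swap red and green)."""
    return rgb[..., [1, 0, 2]]

# A single mostly-red pixel for illustration.
pixel = np.array([[[200, 50, 100]]], dtype=np.uint8)
swapped = swap_red_green(pixel)
# swapped[0, 0] is [50, 200, 100]: the pixel now reads as mostly green.
```

Any other permutation (BGR, BRG, and so on) is just a different index list in the same expression.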
Here are all the images after basic edge detection:
Here are all the images after a red-green swap: