I am using OpenCV to triangulate the position of an object, and I am trying to create a formula that maps the coordinates I obtain to the end point of an arrow drag that casts a fishing rod. I tried polynomial regression to a very high degree, but it is still inaccurate because the regression cannot map an (x,y) input to an (x,y) output; it only maps x to x and y to y independently. I have attached screenshots below for clarity, alongside the formulas obtained from the regression. Any help/ideas/suggestions would be appreciated, thanks.
Edit:
The coordinate pairs are ordered landing position first, then the arrow-pull end position that makes the bobber land there. This is because the fishing blob is the input, and the arrow-pull end location is derived from the blob location. I am using OpenCV to obtain the x,y coordinates, which I believe are just 2D screen coordinates.
The avatar position is locked, and the button to cast the rod is located at an absolute position of (957,748).
The camera position is locked with no rotation or movement.
I believe that the angle the rod is cast at is likely a 1:1 opposite of where it is pulled to. Ex: if the rod was pulled to 225 degrees it would cast at 45 degrees. I am not 100% sure, but I think that the strength is linear. I used linear regression partially because I was not sure about this. There is no altitude difference/slope/wind that affects the cast. The only affecting factor of landing position is where the arrow is dragged to. The arrow will not drag past the 180/360 degree position sideways (relative to cast button) and will simply lock the cast angle in the x direction if it is held there.
The x-y data was collected with a simple program that moves the mouse to the same position (957,748) and drags the arrow to cast the rod with different drag strengths/positions, to build some kind of line of best fit for a general casting formula. The triang_x and triang_y functions included are what the x and y coordinates were run through, respectively, to triangulate the ending drag coordinate for the arrow. This does not work very well because matching x-to-x and y-to-y separately doesn't account for both x and y in each formula.
Left column is the fishing spot coordinates, right column is where the arrow is dragged to in order to hit the fishing spot.
(1133,359) to (890,890)
(858,334) to (886, 900)
(755,579) to (1012,811)
(1013,255) to (933,934)
(1166,469) to (885,855)
(1344,654) to (855,794)
(804,260) to (1024,939)
(1288,287) to (822,918)
(624,422) to (1075,869)
(981,460) to (949,851)
(944,203) to (963,957)
(829,367) to (1005,887)
(1129,259) to (885,932)
(773,219) to (1036,949)
(1052,314) to (919,908)
(958,662) to (955,782)
(1448,361) to (775,906)
(1566,492) to (751,837)
(1275,703) to (859,764)
(1210,280) to (852,926)
(668,513) to (1050,836)
(830,243) to (1011,939)
(688,654) to (1022,792)
(635,437) to (1072,864)
(911,252) to (976,935)
(1499,542) to (785,825)
(793,452) to (1017,860)
(1309,354) to (824,891)
(1383,522) to (817,838)
(1262,712) to (867,758)
(927,225) to (980,983)
(644,360) to (1097,919)
(1307,648) to (862,798)
(1321,296) to (812,913)
(798,212) to (1026,952)
(1315,460) to (836,854)
(700,597) to (1028,809)
(868,573) to (981,811)
(1561,497) to (758,838)
(1172,588) to (896,816)
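To sanity-check the opposite-angle guess, the pairs above can be converted to polar coordinates around the cast button at (957,748). A minimal Python sketch (the variable names and the four sampled pairs are my own choice, not from the bot):

```python
import math

AVA_X, AVA_Y = 957, 748  # cast-button / avatar position from the question

# (fish_x, fish_y, drag_x, drag_y) - a few pairs from the table above
pairs = [
    (1133, 359, 890, 890),
    (755, 579, 1012, 811),
    (1013, 255, 933, 934),
    (1166, 469, 885, 855),
]

results = []
for fx, fy, dx, dy in pairs:
    # polar coordinates of the fish spot and of the drag point, relative to the avatar
    a_fish = math.atan2(fy - AVA_Y, fx - AVA_X)
    a_drag = math.atan2(dy - AVA_Y, dx - AVA_X)
    l_fish = math.hypot(fx - AVA_X, fy - AVA_Y)
    l_drag = math.hypot(dx - AVA_X, dy - AVA_Y)
    # deviation from an exact 180-degree flip, folded into (-180, 180]
    dev = math.degrees(a_drag - a_fish) % 360.0 - 180.0
    results.append((dev, l_fish / l_drag))
    print(f"angle dev {dev:+6.1f} deg, length ratio {l_fish / l_drag:.2f}")
```

For these pairs the deviation stays within roughly ten degrees and the fish-distance to drag-length ratio sits near 3, consistent with the 1:1 opposite-angle idea.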
Shows the bot actions taken within the function and how the formula is used.
from time import sleep
import numpy as np
import pyautogui
from PIL import ImageGrab
# vision and wincap are the bot's own CV / window-capture helpers (not shown)

coeffs_x = np.float64([
-7.9517089428836911e+005,
4.1678460255861210e+003,
-7.5075555590709371e+000,
4.2001528427460097e-003,
2.3767929866943760e-006,
-4.7841176483548307e-009,
6.1781765539212100e-012,
-5.2769581174002655e-015,
-4.3548777375857698e-019,
2.5342561455214514e-021,
-1.4853535063513160e-024,
1.5268121610772846e-027,
-2.9667978919426497e-031,
-9.5670287721717018e-035,
-2.0270490020866057e-037,
-2.8248895597371365e-040,
-4.6436110892973750e-044,
6.7719507722602512e-047,
7.1944028726480678e-050,
1.2976299392064562e-052,
7.3188205383162127e-056,
-6.3972284918241943e-059,
-4.1991571617797430e-062,
2.5577340340980386e-066,
-4.3382682133956009e-068,
1.5534384486024757e-071,
5.1736875087411699e-075,
7.8137258396620031e-078,
2.6423817496804479e-081,
2.5418438527686641e-084,
-2.8489136942892384e-087,
-2.3969101111450846e-091,
-3.3499890707855620e-094,
-1.4462592756075361e-096,
6.8375394909274851e-100,
-2.4083095685910846e-103,
7.0453288171977301e-106,
-2.8342463921987051e-109
])
triang_x = np.polynomial.Polynomial(coeffs_x)
coeffs_y = np.float64([
2.6215449742035207e+005,
-5.7778572049616614e+003,
5.1995066291482431e+001,
-2.3696608508824663e-001,
5.2377319234985116e-004,
-2.5063316505492962e-007,
-9.2022083686040928e-010,
3.8639053124052189e-013,
2.7895763914453325e-015,
7.3703786336356152e-019,
-1.3411964395287408e-020,
1.5532055573746500e-023,
-6.9719956967963252e-027,
1.9573598517734802e-029,
-3.3847482160483597e-032,
-5.5368209294319872e-035,
7.1463648457003723e-038,
4.6713369979545088e-040,
-7.5070219026265008e-043,
-4.5089676791698693e-047,
-3.2970870269153785e-049,
1.6283636917056585e-051,
-1.4312555782661719e-054,
7.8463441723355399e-058,
1.9439588820918080e-060,
2.1292310369635749e-063,
-1.4191866473449773e-065,
-2.1353539347524828e-070,
2.5876946863828411e-071,
-1.6182477348921458e-074
])
triang_y = np.polynomial.Polynomial(coeffs_y)
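Since the true mapping couples x and y, one alternative to two separate 1-D polynomials is a single least-squares fit that takes both coordinates as input. A minimal sketch, assuming an affine model and using a subset of the pairs above (np.linalg.lstsq; the variable names are mine, and the suspect second pair is left out):

```python
import numpy as np

# (fish_x, fish_y, drag_x, drag_y) pairs from the table above (subset)
data = np.array([
    [1133, 359, 890, 890],
    [755, 579, 1012, 811],
    [1013, 255, 933, 934],
    [1166, 469, 885, 855],
    [1344, 654, 855, 794],
    [804, 260, 1024, 939],
    [1288, 287, 822, 918],
    [624, 422, 1075, 869],
], dtype=np.float64)

# design matrix [x, y, 1]: each output is a linear function of BOTH inputs
A = np.column_stack([data[:, 0], data[:, 1], np.ones(len(data))])
B = data[:, 2:4]                      # both outputs fitted at once
coef, *_ = np.linalg.lstsq(A, B, rcond=None)

def predict(x, y):
    """Map a fish position to an estimated (drag_x, drag_y)."""
    return np.array([x, y, 1.0]) @ coef
```

An affine model is a reasonable guess here: a fixed 180-degree flip plus a constant length ratio around a fixed pivot is itself an affine map, so a 38-term polynomial should not be needed.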
def bot_actions(rectangles):
    if len(rectangles) > 0:
        # finds x,y location of fish
        targets = vision.get_click_points(rectangles)
        target = wincap.get_screen_position(targets[0])
        target_pos = (target[0], target[1])
        x1 = target[0]
        y1 = target[1]
        print('1st target at x:{} y:{}'.format(x1, y1))
        sleep(2)
        # sleeps 2 seconds and finds location again to see if fish stopped moving
        targets = vision.get_click_points(rectangles)
        target = wincap.get_screen_position(targets[0])
        target_pos = (target[0], target[1])
        x2 = target[0]
        y2 = target[1]
        print('2nd target at x:{} y:{}'.format(x2, y2))
        xres = abs(x1 - x2)
        yres = abs(y1 - y2)
        # determines if stopped within tolerance
        if xres + yres <= 25:
            print('Fish stopped')
            # calculates arrow drag position
            new_x = triang_x(x2)
            new_y = triang_y(y2)
            print(new_x, new_y)
            # executes arrow drag
            pyautogui.moveTo(957, 748)
            pyautogui.click()
            pyautogui.dragTo(new_x, new_y, .5)
            sleep(3)
            # checks dialogue box to see if fish caught
            pixelRGB = ImageGrab.grab().getpixel((811, 335))
            print(pixelRGB)
            if pixelRGB[0] == 255:
                print('got em!')
                pyautogui.moveTo(1149, 480)
                pyautogui.click()
            else:
                print('didnt get em!')
CodePudding user response:
First you need to clarify a few things:
1. the xy data - is it the position of the object you want to hit, or the position you hit when using specific input data (which would be missing in that case)? In what coordinate system?
2. what position is your avatar? How is the view defined? Is it fully 3D with 6DOF, or just fixed (no rotation or movement) relative to the avatar?
3. what is the physics/logic of your rod casting? Is it an angle (one or two) plus strength? Is the strength linear to distance? Does throwing account for the altitude difference between avatar and target? Does ground elevation (slope) play a role? Are there any other factors like wind, type of rod, etc.?
4. you shared the xy data, but what do you want to correlate it against or derive a formula for? On its own it does not make sense - you apparently forgot to add something, such as the conditions under which each position was taken.
I would solve this by (no further details before you clarify the stuff above):
1. transform the targets' xy into a player-relative coordinate system aligned to the ground
2. compute the azimuth angle (geometrically) - a simple atan2(y,x) will do, but you need to take your coordinate system notation into account
3. compute the elevation angle and strength (geometrically) - simple ballistic physics should apply, although it depends on the physics used by the game (or whatever you are writing this for)
4. adjust for additional stuff - for example, wind can slightly change your angle and strength
In case you have real physics and data, you can do #3 and #4 at the same time.
I converted your Cartesian points:
int ava_x=957,ava_y=748; // avatar
int data[]= // target(x0,y0) , drag(x1,y1)
    {
    1133,359,890,890,
    858,334,886,900,
    755,579,1012,811,
    1013,255,933,934,
    1166,469,885,855,
    1344,654,855,794,
    804,260,1024,939,
    1288,287,822,918,
    624,422,1075,869,
    981,460,949,851,
    944,203,963,957,
    829,367,1005,887,
    1129,259,885,932,
    773,219,1036,949,
    1052,314,919,908,
    958,662,955,782,
    1448,361,775,906,
    1566,492,751,837,
    1275,703,859,764,
    1210,280,852,926,
    668,513,1050,836,
    830,243,1011,939,
    688,654,1022,792,
    635,437,1072,864,
    911,252,976,935,
    1499,542,785,825,
    793,452,1017,860,
    1309,354,824,891,
    1383,522,817,838,
    1262,712,867,758,
    927,225,980,983,
    644,360,1097,919,
    1307,648,862,798,
    1321,296,812,913,
    798,212,1026,952,
    1315,460,836,854,
    700,597,1028,809,
    868,573,981,811,
    1561,497,758,838,
    1172,588,896,816,
    };
Into polar coordinates relative to ava_x, ava_y, using atan2 and the 2D distance formula, and simply printed the angular difference from 180 deg and the ratio between line lengths (that is the yellow text at the left of the screenshot: first the ordinal number, then the angle difference [deg], and then the ratio between line lengths). As you can see, the angle difference stays within +/-10.6 deg and the length ratio within <2.5, 3.6>, probably because of inaccuracy in the OpenCV detections and some randomness in the rod casting from the game logic itself.

As you can see, polar coordinates are best for this. For starters you could simply do this:
// wanted target in polar (obtained by CV)
x = target_x - ava_x;
y = target_y - ava_y;
a = atan2(y,x);
l = sqrt((x*x)+(y*y));
// aiming drag in polar
a += 3.1415926535897932384626433832795; // +180 deg
l /= 3.0; // "avg" ratio between line sizes (target distance is ~3x drag length)
// aiming drag in cartesian
aim_x = ava_x + l*cos(a);
aim_y = ava_y + l*sin(a);
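A Python translation of the same idea, usable from the bot code above (a sketch: the exact 180-degree flip and the 3.0 length ratio are averages estimated from the data, not confirmed game constants, and the measured ratios say the target distance is about 3x the drag length, so the length is divided):

```python
import math

def aim_drag(target_x, target_y, ava_x=957, ava_y=748, ratio=3.0):
    """Map a detected fish position to an arrow-drag end point by
    flipping the angle 180 degrees and shrinking the distance."""
    # wanted target in polar, relative to the avatar / cast button
    x = target_x - ava_x
    y = target_y - ava_y
    a = math.atan2(y, x)
    l = math.hypot(x, y)
    # aiming drag in polar: opposite direction, scaled-down length
    a += math.pi      # rotate 180 degrees
    l /= ratio        # target distance is roughly 3x the drag length
    # back to Cartesian screen coordinates
    return ava_x + l * math.cos(a), ava_y + l * math.sin(a)

# first data pair: fish at (1133, 359) was hit by dragging to (890, 890)
print(aim_drag(1133, 359))  # ~ (898.3, 877.7), close to the recorded (890, 890)
```

In the bot this would replace the two `triang_x(x2)` / `triang_y(y2)` calls with a single `new_x, new_y = aim_drag(x2, y2)`.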
Now, to improve precision, you could measure the dependency of the line ratio on line size (it might not be linear); the angular difference might also be bigger for longer lines ...
Also note that the second cast (ordinal 2) is probably a bug (wrongly detected x,y by CV): if you render the two lines you will see they do not match, so you should not account for that pair and should throw it away from the dataset.
Also note that I code in C, so my goniometric functions use radians (Python's math.atan2, sin and cos use radians too), and the equations might need some additional tweaking for your coordinate systems (negate y?).