Lol, it looks like between this, the tracer hitboxes, and T. Akiba’s throw range numbers (which show roundhouse with 1 more pixel of reach), my recent usage of forward RBG is counterproductive. Strong SPD having more frame advantage is also interesting. I guess the frame advantage numbers for the mashing holds are too small to safe jump even Balrog.
Hmm… must have missed it. I definitely recall getting different numbers than the ones Sirlin quoted for the Akuma fireballs, though. That ended up sending me on a tangent to get information about turbo. <shrug /> Well, it is what it is.
Personally I try to concentrate on picking things that hit, and then adjusting my follow-ups to fit, rather than the other way around.
1 frame of difference on the SPD should basically be a non-issue, since you’ll often be SPD’ing people into the stage wall if you can land it (which cuts the frame advantage), and if you’re not, the link to the Jab Green Hand is unlikely to be that precise anyway.
Yeah, there’s probably no safe-jumps there, especially considering that Balrog tends to reset a little faster than Ryu does.
You know how I mentioned that your frame counts for ground normals were off by a frame compared to T. Akiba’s Rufus? Well, I came to the conclusion that they are indeed missing a startup frame. I used punch lariat and Sim’s cr. away mk (which has 1 startup frame according to your pics) from point blank range at the same time using the pause trick, and the punch lariat always hits first. I also tried SPD vs. Chun’s flip kick and she gets grabbed every time, despite the pics showing her airborne instantly.
Ground normals probably have the same stance as neutral/crouching for one frame in the startup. This is probably easy to test by using a character with a different walking hitbox than neutral (like Chun or Blanka) and using a normal with a noticeable startup animation while walking. If I’m right it should go directly from the walking hitbox to the neutral stance hitbox for 1 frame before going to the attack animation.
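To make the prediction concrete, here’s the per-frame sequence I’d expect to see if the theory holds (the state labels are made up for illustration, not pulled from the game’s data):

```python
# Hypothetical per-frame hitbox states for a normal done while walking,
# under the theory that ground normals have one hidden neutral-stance
# startup frame. Labels are invented; they aren't the game's data.

def predicted_hitbox_states(walking):
    """Frame-by-frame hitbox the character should use as the attack starts."""
    stance_before = "walking" if walking else "standing"
    return [
        stance_before,     # last frame of whatever you were doing
        "standing",        # the hidden startup frame: neutral-stance hitbox
        "attack_startup",  # visible attack animation begins
    ]

# With a character like Chun or Blanka, whose walking hitbox differs from
# standing, the jump from "walking" straight to "standing" should be visible.
states = predicted_hitbox_states(walking=True)
```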
One thing that you may be running into is that the input is not resolved at the same time the screen is drawn. Most games follow a loop of check input, move objects, draw scene, repeat. If you press a button right before it checks for input, you won’t get any extra latency, but if you press it right after, you get almost an entire frame where the game doesn’t realize you’ve pressed it yet. Thus, the input latency of any game varies by up to a full frame depending on when you press relative to the input check. This is probably also the reason simultaneous throws wind up being 50/50: if both players press the throw button at the same time, whoever has their input checked first will be the one to get the throw.
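A toy simulation of that loop shows the effect (the 60 fps timing and the function are made up for illustration, not from any real engine):

```python
# Toy fixed-timestep game loop to illustrate variable input latency.
# Input is polled once at the very start of each frame, so a press that
# lands just after the poll waits almost a full frame to be noticed.

FRAME_MS = 1000 / 60  # one frame at 60 fps, ~16.7 ms

def frames_until_registered(press_time_ms):
    """Return the frame number on which a button press is first seen."""
    frame = 0
    while True:
        poll_time = frame * FRAME_MS  # input check at the top of the frame
        if poll_time >= press_time_ms:
            return frame
        frame += 1

# Press just before the frame-1 poll (~16.7 ms): seen on frame 1.
early = frames_until_registered(16.0)
# Press just after the frame-1 poll: not seen until frame 2.
late = frames_until_registered(17.0)
```

A 1 ms difference in press time costs a whole extra frame, which is exactly the kind of jitter that makes frame-perfect tests inconsistent.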
I’m not sure what the pause trick is. It may be that normals have a single frame of ‘unanimated’ startup. I did some testing of Ryu’s moves this morning - here’s a sample result:
WARNING - lots of images http://www.pedantic.org/~nate/HDR/misc/ryucrstrong.html
The pause trick I mentioned is holding down the attack buttons after pressing start, to ensure they happen as soon as I unpause the game (at the same time). For Lariat vs. Sim db mk, I press start, hold mk and mp with Zangief, hold db mk with Sim, and unpause the game. The tricky part is that pressing X or O on the controller that paused the game will unpause it, so you have to press those buttons last. For SPD I just do the motion before pausing the game and then hold punch during the pause.
You could probably test the unanimated-startup-frame theory with the test I mentioned in my last post. It could either be unanimated (it looks the same as whatever you were doing before) or just look the same as the neutral/crouching stance.
This is going to sound weird, but here goes. This one-frame difference could be caused by interlacing. As you may know, SD capture is 60 fields per second rather than progressive: it shows one half of the image at a time, odd lines then even lines. Because this looks ugly on progressive displays, we usually deinterlace such footage. Deinterlacing combines the even lines from one field and the odd lines from another to form one complete image. Depending on the filter used (or maybe even the TV system, since material can be top-field-first or bottom-field-first), you may get a different image at a given point in time. For example, if one filter weaves fields 1002 and 1003 into a complete image, but another filter weaves fields 1003 and 1004 to make the image for that same point in time, the resulting frame will be different.
The point I’m trying to make is that depending on how interlaced video is processed, it’s possible to be off by a frame. Definitive testing should be done with HD capture (60 fps progressive). This used to be a problem for me years ago when I was editing video: you could start putting something together in Premiere, but if you changed your AviSynth filters or script, your scene changes could shift and mess up all your cuts.
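The field-pairing ambiguity is easy to see with labeled fields instead of real video (everything here is a made-up sketch, not an actual deinterlacing filter):

```python
# Sketch of how two weave-style deinterlacing choices can disagree by a
# field. Fields are just labeled ints here; a real filter pairs scanlines.

fields = list(range(1000, 1008))  # captured fields: 1000, 1001, 1002, ...

def weave(fields, offset):
    """Pair consecutive fields into frames, starting at `offset`.

    offset=0 pairs (1000, 1001), (1002, 1003), ...
    offset=1 pairs (1001, 1002), (1003, 1004), ... one field later.
    """
    return [(fields[i], fields[i + 1])
            for i in range(offset, len(fields) - 1, 2)]

frames_a = weave(fields, 0)  # filter A's idea of each output frame
frames_b = weave(fields, 1)  # filter B's idea of each output frame
# The same frame index now names a different pair of fields, so a frame
# count taken with filter A can be off by one relative to filter B.
```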
I may be wrong, but I’m pretty sure this could be why.
Please don’t post piles of images directly into the thread like that. It’s extra annoying for people with low bandwidth connections.
NTSC is always lower-field first, so I’m not sure how that could cause a problem.
In fact, if you look closely, you’ll see that my images are bobbed deinterlaced fields.
That’s fine then. I was just clutching at straws because I didn’t appreciate how much you may or may not have known about video systems. You obviously have a good grasp of it, so there’s no need for me to go into depth about these kinds of things. Just wondering, does your software/card allow capture at a full 59.94 fps, or is it like most stuff and captures at 29.97 interlaced?
I do remember there being some slight differences between NTSC-J and NTSC as used in America, but I just looked it up and it’s only the black levels. At first I was wondering if maybe there was a difference in field order (I think there may be between PAL and NTSC, but it seems not between NTSC variants).
Can I ask what connection you are using? I use RGB scart by the way. That was the main reason I got a DVD recorder, because RGB scart is pretty awesome as far as SD goes, but all the SD capture cards I could find only offered S-Video as the best.
Ah, I didn’t think of NTSC/NTSC-J differences. When I started capturing stuff I was concerned about the upper/lower field order so I was manually checking, but after a few dozen captures and a little research it’s not a concern for me anymore.
I have a Canopus ADVC-100, which does analog video to DV conversion feeding into a computer over a FireWire cable. By the time I get to it, it’s 29.97 frames per second interlaced. It’s a couple of years old and getting flakier by the day, but nobody else was really bothering with HDR frame images, so I figured I might as well.