(responding out of order for narrative purposes)
[violet] wrote:Are you saying it's normal behavior for a scriptless R/Der to F5 the reports page ten times per second? I struggle to see this as it means they have 100ms to send the request to the server, have it processed, get it back, and then, with their human brain, make a decision about it. Where I am in the world, it takes 250ms - 800ms just to get a response from the reports page, even with 'template-overall=none', so if I'm pressing F5 ten times a second, all I'm doing is canceling every previous request before it completes -- I never see any data at all. In an ideal scenario where I'm physically located very close to the server and it's not congested, I might get a response as fast as 50ms, after which my browser can begin to paint, so it's conceivably possible -- but I have to read & process the data I get back in 1/20th of a second.
The short answer is yes, but it's more complicated than that. A few things come together here.
First, there is what we call "cadence": a rhythm of refreshes appropriate to each individual's network speed. This avoids exactly what you mentioned, cancelling the previous request before it completes, which ultimately makes things slower. My speeds to the site are comparable to yours, so in my case the cadence is usually in the 300-400ms range: slow enough not to cancel the previous request, but fast enough to respond fairly quickly. The 800ms spikes usually stay away with good cadence, since those spikes are related to the Keep-Alive issue we had talked about as early as 2017.
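The idea can be sketched in code. This is only an illustration of picking a cadence from recent round-trip times; the function name and the margin value are my own assumptions, not anything the site provides:

```javascript
// Minimal sketch: derive a refresh interval ("cadence") from recent
// round-trip times, so a new request never cancels the previous one.
// The 50ms safety margin is an illustrative assumption.
function pickCadenceMs(recentLatenciesMs, marginMs = 50) {
  // base the interval on the worst recent round-trip, plus a margin
  const worst = Math.max(...recentLatenciesMs);
  return worst + marginMs;
}

// With 250-350ms responses, this suggests refreshing roughly every 400ms.
console.log(pickCadenceMs([250, 300, 350]));
```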
Second, the cadence becomes muscle memory: the refreshing turns into a basically autonomous process of your F5ing hand, while the rest of you concentrates on parsing the output and responding. Because of this, it's not unusual for somebody chasing manually (or even with Breeze++) to refresh an additional 1-2 times before reacting to the output, particularly if they have lower latencies than mine and/or slower, older brains like mine.
Third, human reaction time is usually the limiting factor here, so on a high-speed chase, those chasing will not even bother parsing the output; they simply trust that they have dossier'd the correct nations beforehand and move as soon as any output appears.
So, to answer your question: 10 times a second is a lot, but not infeasible for somebody in a suitable geographical location with a young, sharp brain.
[violet] wrote:We actually identified that very early -- at first I deemed the traffic illegal based on the lack of UserAgent, but Elu pointed out that he'd
advised people that the URL could be an acceptable substitute under certain conditions. We then talked about updating the Script Rules to reflect this. They still are not updated, though, because there is a lot of discussion about where we're going with script rules in general. I didn't check whether Reliant met Elu's conditions to qualify for not using a UserAgent, but I assumed it did, since there's not much reason to avoid it otherwise.
Suggested amendment to OSRS Script Rules Part 2 for you to copy/paste, change, or ignore at your leisure:
2. Identify your script via User Agent
You must identify your script with every request it makes to the site. This identification must at the very least include the name of your script, the version of your script, and a means to contact you, the script author. The contact information could be a nation name, an email address, or your website's URL; it allows us to contact you if something goes wrong and give you a chance to fix it. In nearly all cases, the identification should occur with the User-Agent HTTP header. If that is not possible due to technical limitations, it's acceptable to instead pass the identification via the URL parameter "script".
If the script is performing an action based on direct user input (see above), it should include a URL Parameter "userclick" with a UTC timestamp in milliseconds containing the time the user clicked the button.
As an example of this rule, in a Greasemonkey/Tampermonkey script responding to user input, one could do the following to any URLs they issue:
- Code: Select all
// append script identification (name, version, author) as a query parameter;
// use "?" for the first parameter and "&" for subsequent ones
url += (url.includes("?") ? "&" : "?") + "script=exampleScript_" + GM_info.script.version + "_by_Testlandia";
// append the user click timestamp in UTC milliseconds
// (note: Date.UTC() with no arguments returns NaN; Date.now() is the epoch timestamp)
url += "&userclick=" + Date.now();
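For comparison, when a script can set HTTP headers freely (e.g. a standalone tool rather than an in-browser userscript, where the browser controls the User-Agent), identification via the header might look like the following sketch. The function name and the exact header layout are my own illustration, not a mandated format:

```javascript
// Sketch: build identifying request headers with the three required
// pieces of information (script name, version, contact).
function buildIdentifyingHeaders(scriptName, version, contact) {
  return {
    "User-Agent": scriptName + "/" + version + " (by " + contact + ")",
  };
}

// e.g. fetch(url, { headers: buildIdentifyingHeaders("exampleScript", "1.0", "Testlandia") })
console.log(buildIdentifyingHeaders("exampleScript", "1.0", "Testlandia")["User-Agent"]);
```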
Flanderlion wrote:Are the QoL improvements for individuals using their own scripts worth the admin time to police it? Like, sometimes something that's a good idea (allowing scripts on the HTML to improve QoL stuff) turns into more work than it's worth. Would it really be the end of the world to have a blanket ban, and enforce it only when it comes to mod/admin attention?
I think HTML scripts & bots exist for two reasons:
(1) The author found it easier than learning how to interact with the API; or
(2) The bot wants to do things the API doesn't allow or support.
-snip-
I would add two more here, from my perspective at least, that couldn't be easily sorted into category two.
The first is accessibility, and I don't think I can explain the motivation as well as Shizensky did in
this post, starting at the fourth paragraph from the bottom. Others in this thread have chimed in on a similar issue, but it might help to also have a visual example, so I've prepared one. Left is Breeze's augmented reports page, right is stock Reports page:
The second is QoL improvements to the existing interface. For example, the old Telescope augmented the site UI with buttons that allowed endorsing or moving directly from any nation or region link. I know that raiders have scripts that do the same with ejection buttons. I'm less familiar with other areas of the game (e.g. cards and such), but I'm told that similar kinds of things appear there as well. These aren't really accessibility features or even necessary to play the game, but they make things much easier and more convenient. I'm not sure how such UI augmentation could be possible through the API.