Script: "Reliant" + HTML Script Legality Discussion

Bug reports, general help, ideas for improvements, and questions about how things are meant to work.


User avatar
Wallenburg
Postmaster of the Fleet
 
Posts: 22873
Founded: Jan 30, 2015
Democratic Socialists

Postby Wallenburg » Tue Jun 14, 2022 3:13 pm

I think discussion on the matter of HTML script legality would benefit immensely if it were clear whether admin has already decided to kill the R/D game or is merely entertaining the idea. Is any amount of discussion here liable to actually convince admin against such a drastic course of action over such a nothingburger, or are we discussing ways to conduct damage control once a ban is implemented?
While she had no regrets about throwing the lever to douse her husband's mistress in molten gold, Blanche did feel a pang of conscience for the innocent bystanders whose proximity had caused them to suffer gilt by association.

King of Snark, Real Piece of Work, Metabolizer of Oxygen, Old Man from The East Pacific, by the Malevolence of Her Infinite Terribleness Catherine Gratwick the Sole and True Claimant to the Bears Armed Vacancy, Protector of the Realm

User avatar
Reploid Productions
Director of Moderation
 
Posts: 30512
Founded: Antiquity
Democratic Socialists

Postby Reploid Productions » Tue Jun 14, 2022 3:59 pm

Wallenburg wrote:I think discussion on the matter of HTML script legality would benefit immensely if it were clear whether admin has already decided to kill the R/D game or is merely entertaining the idea. Is any amount of discussion here liable to actually convince admin against such a drastic course of action over such a nothingburger, or are we discussing ways to conduct damage control once a ban is implemented?

If admin ultimately decides to go "no more HTML scripts", there will have to be a lengthy discussion among the team and with players on the subject prior to any implementation to see where we can accommodate via API or other alterations to reduce the need for such scripts in the first place. And the actual implementation would most likely have to include a sizable grace period as well.

Ultimately, anything that removes the need for moderation judgement calls has got my vote. Scripts, and trying to enforce rules regarding them, are the modern-day equivalent of the pre-Influence griefing rules: mods spending outrageous amounts of time (and trying to get admins to spend outrageous amounts of time) sorting out situations where we don't have access to all the information needed to act, generating massive headaches for moderation, for raiders, for defenders, and for natives. Just as the pre-Influence rules were a mere band-aid for early-era R/D, the current scripting rules have proven to be an unsustainable band-aid from a moderation standpoint for modern R/D. I suspect we'll all be much happier if we never have to puzzle through another Predator or another Reliant ever again and [v] could just go "Somebody's hitting the HTML site with a script- BLOCKED!" instead of... well, all this.
Forum mod since May 8, 2003 -- Game mod since May 19, 2003 -- Nation turned 20 on March 23, 2023!
Sunset's DoGA FAQ - For those using DoGA to make their NS military and such.
One Stop Rules Shop -- Reppy's Sig Workshop -- Getting Help Page
[violet] wrote:Maybe we could power our new search engine from the sexual tension between you two.
Char Aznable/Giant Meteor 2024! - Forcing humanity to move into space and progress whether we goddamn want to or not!

User avatar
[violet]
Executive Director
 
Posts: 16207
Founded: Antiquity

Postby [violet] » Tue Jun 14, 2022 7:16 pm

Flanderlion wrote:Are the QoL improvements for individuals using their own scripts worth the admin time to police them? Like, sometimes something that's a good idea (allowing scripts on the HTML site to improve QoL stuff) turns into more work than it's worth. Would it really be the end of the world to have a blanket ban, and enforce it only when something comes to mod/admin attention?

I think HTML scripts & bots exist for two reasons:
(1) The author found it easier than learning how to interact with the API; or
(2) The bot wants to do things the API doesn't allow or support.

There seem to be plenty of bots in both categories. A migration would mean:
(1) Bot authors have to go to the trouble of porting their code to use the API. This isn't especially challenging, but not all bots are actively maintained, so some may never be ported.
(2) As we add API endpoints to support things bots want to do, we have to answer a series of questions around what should be permitted. Should they be allowed to auto-endorse, and if so, how quickly? And so on. Each may be controversial, and clearly many people fear the answers would be too restrictive.

Roavin wrote:Once again, [v], thank you so much for coming into this thread and giving us this information - it might not seem like it this way for some at first glance, but as the primary tech dude within this thread, your post answers a lot of extant questions that we were having, and (for me at least) vindicates having put the effort into all of this.

Glad to hear it. You can thank Sedge, as I was hoping to stay out of this.

Roavin wrote:There is an inconsistency in the documentation - Elu's suggested identification via URL parameters does not mention contact info, but that example is what Reliant used to identify itself. With your post, we now know that this was incorrect, as it lacks the contact info required by the OSRS Script Rules. We'll fix that in Reliant immediately, but it might be worth either amending Elu's post or, even better, amending that part of the OSRS script rules to include a "correct" example. (@NS Staff generally: what's the best way to report this, beyond writing it here?)

We actually identified that very early -- at first I deemed the traffic illegal based on the lack of UserAgent, but Elu pointed out that he'd advised people that the URL could be an acceptable substitute under certain conditions. We then talked about updating the Script Rules to reflect this. They still are not updated, though, because there is a lot of discussion about where we're going with script rules in general. I didn't check whether Reliant met Elu's conditions to qualify for not using a UserAgent, but I assumed it did, since there's not much reason to avoid it otherwise.

Roavin wrote:On the bots being banned: You've implied several times that you do this with some frequency, but I haven't witnessed it in the circles I frequent; even with Storm, the unnamed TG tool, and now Reliant, no ban took place, but other measures were used instead; and I know of at least an order of magnitude more than 3 tools doing a variety of things that have not gotten blocked or banned or whatnot. Is it possible that you have just a bit of confirmation bias, since the scripts you deal with are the bad ones and the good ones just do their thing as they should? :P

It's certainly true that I generally deal with scripts that are flagrant rules violations. R/D scripts, in my experience, are generally written by people who have a good understanding of the HTML Script Rules, and try to comply (or, at least, not break them too obviously). Many of the bots I block are clearly from people who don't realize those rules even exist or don't care.

Roavin wrote:
[violet] wrote:Incidentally the script spams (or spammed) requests like this at the rate of ~10 reqs/second -- I don't know what it's doing, but it looks to be requesting the exact same data over and over, which is behavior I usually interpret as a broken bot and block via CloudFlare.
Code:
xxx.xxx.xxx.xxx - - [06/Feb/2022:22:25:39 -0800] "GET /template-overall=none/page=reports/script=reliant_1.3/userclick=1644215140121 HTTP/1.1" 200 772 "https://www.nationstates.net/template-overall=none/page=blank/reliant=main" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.82 Safari/537.36"


That's normal behavior; the difference here is that usually it's R/Ders just manually F5ing the normal reports page, whereas this is Reliant doing it through a keybind and parsing the result for presentation to the user.

Are you saying it's normal behavior for a scriptless R/Der to F5 the reports page ten times per second? I struggle to see this, as it means they have 100ms to send the request to the server, have it processed, get it back, and then, with their human brain, make a decision about it. Where I am in the world, it takes 250-800ms just to get a response from the reports page, even with 'template-overall=none', so if I'm pressing F5 ten times a second, all I'm doing is canceling every previous request before it completes -- I never see any data at all. In an ideal scenario where I'm physically located very close to the server and it's not congested, I might get a response as fast as 50ms, after which my browser can begin to paint, so it's conceivably possible -- but I have to read & process the data I get back in 1/20th of a second.

User avatar
United Calanworie
Technical Moderator
 
Posts: 3839
Founded: Dec 12, 2018
Democratic Socialists

Postby United Calanworie » Tue Jun 14, 2022 7:43 pm

[violet] wrote:
Roavin wrote:
That's normal behavior; the difference here is that usually it's R/Ders just manually F5ing the normal reports page, whereas this is Reliant doing it through a keybind and parsing the result for presentation to the user.

Are you saying it's normal behavior for a scriptless R/Der to F5 the reports page ten times per second? I struggle to see this, as it means they have 100ms to send the request to the server, have it processed, get it back, and then, with their human brain, make a decision about it. Where I am in the world, it takes 250-800ms just to get a response from the reports page, even with 'template-overall=none', so if I'm pressing F5 ten times a second, all I'm doing is canceling every previous request before it completes -- I never see any data at all. In an ideal scenario where I'm physically located very close to the server and it's not congested, I might get a response as fast as 50ms, after which my browser can begin to paint, so it's conceivably possible -- but I have to read & process the data I get back in 1/20th of a second.

I personally have a fast connection to the servers and can typically load the reports page in ~70-80ms, which means that at that point it's down to my human reaction time when chasing. I use Breeze++, so it's just keybindings (N for refresh instead of F5 being the big one that comes to mind here), not AJAX calls being made in the background.
Trans rights are human rights.
||||||||||||||||||||
Discord: Aav#7546 @queerlyfe
She/Her/Hers
My telegrams are not for Moderation enquiries, those belong in a GHR. Feel free to reach out if you want to just chat.

User avatar
Esfalsa
Spokesperson
 
Posts: 132
Founded: Aug 07, 2015
Civil Rights Lovefest

Postby Esfalsa » Tue Jun 14, 2022 8:03 pm

An approximately 50ms reaction time can be hard to believe but is also not necessary. Refreshing the reports page as quickly as possible is an attempt to see move happenings as soon as possible after they are generated. It doesn't take much brainpower to refresh a page repeatedly, so sometimes I might see a move happening but keep refreshing until my reaction time catches up.

User avatar
Refuge Isle
Technical Moderator
 
Posts: 1900
Founded: Dec 14, 2018
Left-wing Utopia

Postby Refuge Isle » Tue Jun 14, 2022 8:15 pm

Similarly, living close to a Cloudflare server, I have a page load of 30-60ms.

I don't know how familiar you are with chasing, but we dossier raider nations and wait for them to act, refreshing madly until they move so that we can provide that rapid reaction to raid moves that may start and end within a second. My human brain does not need to analyse vast data with every refresh: if nothing has populated on the reports page, there has been no raider activity and no additional response is required of me.

Generally one does *not* even make a decision on a move happening except to follow the raiders with keybound move commands. Consequently, we end up chasing into targets that we have to analyse after the fact to determine whether the move was legitimate, a decoy, or a region we want to avoid defending entirely. It's better to backload that mental math than do it when it could cost you a defence or result in the next one to fourteen updates of liberation organisation and planning.

User avatar
[violet]
Executive Director
 
Posts: 16207
Founded: Antiquity

Postby [violet] » Tue Jun 14, 2022 11:36 pm

Thanks for the above. Somewhere around 99% of players live at least 40ms away from our server, which stacks on top of page generation time, so 10 requests per second still strikes me as very hard to achieve by someone without a script to prevent F5s from becoming simultaneous and stomping on each other. But I understand the point about refreshing quickly and allowing your slow human brain to process data from several refreshes ago.

Just so I understand, with these requests for the reports page, Reliant is simply acting as a keybind? It's not doing anything special with the data returned?

User avatar
Roavin
Admin
 
Posts: 1778
Founded: Apr 07, 2016
Democratic Socialists

Postby Roavin » Tue Jun 14, 2022 11:41 pm

(responding out of order for narrative purposes)

[violet] wrote:Are you saying it's normal behavior for a scriptless R/Der to F5 the reports page ten times per second? I struggle to see this, as it means they have 100ms to send the request to the server, have it processed, get it back, and then, with their human brain, make a decision about it. Where I am in the world, it takes 250-800ms just to get a response from the reports page, even with 'template-overall=none', so if I'm pressing F5 ten times a second, all I'm doing is canceling every previous request before it completes -- I never see any data at all. In an ideal scenario where I'm physically located very close to the server and it's not congested, I might get a response as fast as 50ms, after which my browser can begin to paint, so it's conceivably possible -- but I have to read & process the data I get back in 1/20th of a second.


The short answer is yes, but it's more complicated than that. A few things come together here.

First, there is what we call "cadence", which is a rhythm of refreshes appropriate for each individual's network speed. This is to avoid what you mentioned, i.e. cancelling the previous request before it completes, as that ultimately makes things slower. My speeds to the site are comparable to yours, so my cadence is usually in the 300-400ms range — slow enough not to cancel a previous request, but fast enough to be able to respond fairly quickly. The 800ms spikes usually stay away with good cadence, since those spikes are related to the Keep-Alive issue we had talked about as early as 2017.

Second, the cadence becomes muscle memory, basically an autonomous process of your F5ing hand, while the rest of your brain concentrates on parsing the output and responding. It's not unusual for somebody chasing manually (or even with Breeze++) to refresh an additional 1-2 times before reacting to the output because of this, particularly if they have latencies faster than mine and/or slower, older brains like mine.

Third, human reaction time is usually the limiting factor here, so when on a high-speed chase, those chasing will not even bother parsing the output, but rather just trust that they have dossier'd the correct nations beforehand and will move when any output appears.

So, to answer your question: 10 times a second is a lot, but not infeasible for somebody with a suitable geographical location and a young, sharp brain.

[violet] wrote:We actually identified that very early -- at first I deemed the traffic illegal based on the lack of UserAgent, but Elu pointed out that he'd advised people that the URL could be an acceptable substitute under certain conditions. We then talked about updating the Script Rules to reflect this. They still are not updated, though, because there is a lot of discussion about where we're going with script rules in general. I didn't check whether Reliant met Elu's conditions to qualify for not using a UserAgent, but I assumed it did, since there's not much reason to avoid it otherwise.


Suggested amendment to OSRS Script Rules Part 2 for you to copy/paste, change, or ignore at your leisure:

2. Identify your script via User Agent

You must identify your script with every request it makes to the site. This identification must at the very least include the name of your script, the version of your script, and a means to contact you, the script author. The contact information could be a nation name, an email address, or your website's URL, and allows us to contact you if something goes wrong, and give you a chance to fix it. In nearly all cases, the identification should occur with the User-Agent HTTP header. If that is not possible due to technical limitations, it's acceptable to instead pass the identification via the URL Parameter "script".

If the script is performing an action based on direct user input (see above), it should include a URL Parameter "userclick" with a UTC timestamp in milliseconds containing the time the user clicked the button.

As an example of this rule, in a Greasemonkey/Tampermonkey script responding to user input, one could do the following to any URL it issues:
Code:
// append script name, version, and author
url = url + "/script=exampleScript_" + GM_info.script.version + "_by_Testlandia";
// append user click timestamp in UTC milliseconds
// (Date.UTC() with no arguments returns NaN; Date.now() already returns
// milliseconds since the UTC epoch)
url = url + "/userclick=" + Date.now();
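
(For comparison, a rough sketch of the rule's preferred form, identification via the User-Agent header, as it might look in a standalone Node script; the script name and contact details are placeholders rather than an official example. Userscripts running in a browser generally can't override the browser's User-Agent, which is presumably why the URL fallback exists.)
Code:
// Hypothetical standalone example: identify the script via the User-Agent
// header on every request. Name, version, and contact info are placeholders.
const UA = "ExampleScript/1.3 (by nation Testlandia; example@example.com)";

async function getReports() {
  const res = await fetch(
    "https://www.nationstates.net/template-overall=none/page=reports",
    { headers: { "User-Agent": UA } } // identification travels with the request
  );
  return res.text();
}

getReports().then((html) => console.log(html.length));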



Flanderlion wrote:Are the QoL improvements for individuals using their own scripts worth the admin time to police them? Like, sometimes something that's a good idea (allowing scripts on the HTML site to improve QoL stuff) turns into more work than it's worth. Would it really be the end of the world to have a blanket ban, and enforce it only when something comes to mod/admin attention?

I think HTML scripts & bots exist for two reasons:
(1) The author found it easier than learning how to interact with the API; or
(2) The bot wants to do things the API doesn't allow or support.
-snip-

I would add two more here, from my perspective at least, that couldn't be easily sorted into category two.

The first is accessibility, and I don't think I can explain the motivation as well as Shizensky did in this post, starting at the fourth paragraph from the bottom. Others in this thread have chimed in on a similar issue, but it might help to also have a visual example, so I've prepared one. Left is Breeze's augmented reports page, right is stock Reports page:

[Image: side-by-side comparison of Breeze's augmented reports page (left) and the stock Reports page (right)]


The second is QoL improvements to the existing interface. For example, the old Telescope augmented the site UI to include buttons that would allow endorsing or moving directly from any nation or region link. I know that raiders have scripts that do the same with ejection buttons. I'm less familiar with other areas of the game (e.g. cards), but I'm told that similar kinds of things appear there as well. These aren't really accessibility features or even necessary to play the game, but they make things much easier and more convenient. I'm not sure how such UI augmentation could be possible through the API.
Helpful Resources: One Stop Rules Shop | API documentation | NS Coders Discord
About me: Longest serving Prime Minister in TSP | Former First Warden of TGW | aka Curious Observations

Feel free to TG me, but not about moderation matters.

User avatar
Roavin
Admin
 
Posts: 1778
Founded: Apr 07, 2016
Democratic Socialists

Postby Roavin » Tue Jun 14, 2022 11:47 pm

(sorry, [v], you responded as I was writing my previous post :P )

[violet] wrote:Thanks for the above. Somewhere around 99% of players live at least 40ms away from our server, which stacks on top of page generation time, so 10 requests per second still strikes me as very hard to achieve by someone without a script to prevent F5s from becoming simultaneous and stomping on each other. But I understand the point about refreshing quickly and allowing your slow human brain to process data from several refreshes ago.

Just so I understand, with these requests for the reports page, Reliant is simply acting as a keybind? It's not doing anything special with the data returned?


Reliant in particular does a bit more to parse and massage the data, but once again only in response to user input and as a convenience function. Pretty much all other tools used historically for these purposes (be they Breeze, Telescope, Storm, Koru, or whatnot) don't do anything smart and just show the data to the user (possibly stylized).

The main advantage of Reliant and later versions of Telescope is actually that the user no longer has to keep cadence: as a consequence of correct simultaneity handling, the user can't refresh "too early" and cancel a request. But "0s chases" (meaning following within the same second as per the happenings timestamp) are easily and frequently done by players such as Luca or GK using only Breeze's keybinds.
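
(As an illustration of what I mean by "correct simultaneity handling", here is a minimal sketch, not Reliant's actual code: keypresses are simply ignored while a request is in flight, so mashing the key can never cancel an outstanding request.)
Code:
// Hedged sketch: a refresh keybind that cannot fire "too early".
// The render() function and the URL parameters are illustrative only.
let inFlight = false;
document.addEventListener("keydown", async (ev) => {
  if (ev.key !== "n" || inFlight) return; // drop keypresses mid-request
  inFlight = true;
  try {
    const res = await fetch("/template-overall=none/page=reports" +
      "/script=exampleScript_1.0_by_Testlandia/userclick=" + Date.now());
    render(await res.text()); // hypothetical display routine
  } finally {
    inFlight = false; // ready for the next keypress
  }
});
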
Helpful Resources: One Stop Rules Shop | API documentation | NS Coders Discord
About me: Longest serving Prime Minister in TSP | Former First Warden of TGW | aka Curious Observations

Feel free to TG me, but not about moderation matters.

User avatar
Trotterdam
Postmaster-General
 
Posts: 10545
Founded: Jan 12, 2012
Left-Leaning College State

Postby Trotterdam » Wed Jun 15, 2022 12:26 am

Roavin wrote:Left is Breeze's augmented reports page, right is stock Reports page:
It looks like all of the differences except for the "page load time" addition could be handled using a custom CSS sheet, which isn't even a script in any sense in which the word is normally used?

User avatar
Roavin
Admin
 
Posts: 1778
Founded: Apr 07, 2016
Democratic Socialists

Postby Roavin » Wed Jun 15, 2022 12:51 am

Trotterdam wrote:
Roavin wrote:Left is Breeze's augmented reports page, right is stock Reports page:
It looks like all of the differences except for the "page load time" addition could be handled using a custom CSS sheet, which isn't even a script in any sense in which the word is normally used?


Yes, in fact Breeze handles this with CSS.
Helpful Resources: One Stop Rules Shop | API documentation | NS Coders Discord
About me: Longest serving Prime Minister in TSP | Former First Warden of TGW | aka Curious Observations

Feel free to TG me, but not about moderation matters.

User avatar
[violet]
Executive Director
 
Posts: 16207
Founded: Antiquity

Postby [violet] » Wed Jun 15, 2022 12:55 am

Roavin wrote:
[violet] wrote:Are you saying it's normal behavior for a scriptless R/Der to F5 the reports page ten times per second?

The short answer is yes

Thanks, and I understand your explanations, but I'm having trouble reconciling them with the conclusion. It seems to me that due to the reasons we've discussed, there are only limited circumstances in which a script-less user would refresh 10 times per second: in particular, they have to live next door to the origin server, and have excellent cadence and lightning reflexes. Which I don't think is normal. Whereas there are a couple of reasons why that behavior would be normal for Reliant users: no need to worry about cadence, and benefiting from whatever additional processing Reliant provides that you allude to in your second post.

In the server logs, I see Reliant sends plenty of very high-velocity page requests, whereas non-Reliant traffic to the same pages tends to be much slower. Now this might be explained if we accept that people using Reliant are precisely those who would be fast-clicking without it anyway. But that strikes me as a bit of a stretch -- I accept it to a degree, but I don't know if I can at 10 reqs/s. I want to make sure I don't misunderstand -- is that actually your assertion, that Reliant has nothing to do with that behavior, as it's normal for R/Ders without scripts too?

Roavin wrote:I would add two more here, from my perspective at least, that couldn't be easily sorted into category two. The first is accessibility

I still don't see anything that couldn't be fetched from the API, though. The Breeze pic in particular is so different from the stock NS reports page that I really can't fathom what it gains by hitting the HTML site -- other than avoiding the API Happenings delay, of course. The content is slower to generate, liable to change format without warning, and Breeze throws away the HTML formatting it comes wrapped in... so what is the unavoidable need to fetch it from the HTML site, rather than consume XML or JSON from an API? Or, equally, when it's sending commands, why is it essential to send those commands to the HTML site, not the API?

If the answer is just "the API Happenings delay," then that's what I was referring to earlier, where it's a script that hits the HTML because it wants to do something the API doesn't currently permit. And, to be clear, I'm not saying I want to force all HTML scripts to eat that API delay. I'm saying we would have a lot of questions about what kind of delay there should be, if any, for this and other new API endpoints.

Roavin wrote:The second is QoL improvements to the existing interface. For example, the old Telescope augmented the site UI to include buttons that would allow endorsing or moving directly from any nation or region link. I know that raiders have scripts that do the same with ejection buttons. I'm less familiar with other areas of the game (e.g. cards), but I'm told that similar kinds of things appear there as well. These aren't really accessibility features or even necessary to play the game, but they make things much easier and more convenient. I'm not sure how such UI augmentation could be possible through the API.

So these range from simple themes and stylesheets (no concern to admin at all), to tools that add or move around buttons (generally ok), to tools that handle executing requests and processing the returned data (our area of concern). None of them bar the last are currently hitting the HTML site, right? So they're not the subject of discussion. And the last kind, which is much closer to a bot than a UI augmentation, is just as capable of querying the API as it is of querying the HTML site, as far as I can see.
Last edited by [violet] on Wed Jun 15, 2022 4:30 pm, edited 1 time in total.
Reason: Struck out "add [buttons]"

User avatar
Roavin
Admin
 
Posts: 1778
Founded: Apr 07, 2016
Democratic Socialists

Postby Roavin » Wed Jun 15, 2022 1:53 am

[violet] wrote:Thanks, and I understand your explanations, but I'm having trouble reconciling them with the conclusion. It seems to me that due to the reasons we've discussed, there are only limited circumstances in which a script-less user would refresh 10 times per second: in particular, they have to live next door to the origin server, and have excellent cadence and lightning reflexes. Which I don't think is normal. Whereas there are a couple of reasons why that behavior would be normal for Reliant users: no need to worry about cadence, and benefiting from whatever additional processing Reliant provides that you allude to in your second post.

In the server logs, I see Reliant sends plenty of very high-velocity page requests, whereas non-Reliant traffic to the same pages tends to be much slower. Now this might be explained if we accept that people using Reliant are precisely those who would be fast-clicking without it anyway.


That's certainly part of the equation, yes. Particularly Refuge Isle, Grea Kriopia, and Altmoras are among the very fastest even without Reliant, due to a combination of those factors you mentioned.

[violet] wrote:But that strikes me as a bit of a stretch -- I accept it to a degree, but I don't know if I can at 10 reqs/s. I want to make sure I don't misunderstand -- is that actually your assertion, that Reliant has nothing to do with that behavior, as it's normal for R/Ders without scripts too?


I'd say you're right and Reliant does have somewhat of an effect on this; not on the scale of 100% or more, but noticeable and measurable, so I measured it briefly (not highly scientifically, but enough for the sake of discussion). My request completion times right now are about 300-350ms (slightly slower than usual due to work things). With Breeze, my intuitive cadence normalized at about 500-600ms, while spamclicking with Reliant gave me cadences of about 400-450ms. You should be able to see that in the server logs right now from this IP; the Reliant requests identify themselves, and the requests to non-template reports come from using Breeze. (NB: I used Breeze rather than just the raw reports page because Breeze's reports page includes the page load time, but otherwise the timing is equivalent - in either case, it's just window.location.reload(), be it triggered through F5 or through N.)

So, yes, Reliant's approach does lead to roughly 20% more requests (an approach you've expressed annoyance at before, so I'm very cognizant that this probably won't stay this way forever).

I can't sustain 10cps, but my 35-year-old fingers do let me easily sustain 7cps over 10 seconds (as measured here), and I can certainly imagine that younger and less broken fingers can go faster. For me, 7cps vs 10cps wouldn't make much of a difference, but it would probably matter more for those who are fast anyway, since their requests come through faster than they can click. Does that make a difference in terms of gameplay? Probably not, but I haven't run the numbers (nor do I have a great idea of how to do so without violating the script rules by having a bot click at predetermined intervals), so maybe it does at the highest speeds.

[violet] wrote:
Roavin wrote:I would add two more here, from my perspective at least, that couldn't be easily sorted into category two. The first is accessibility

I still don't see anything that couldn't be fetched from the API, though. The Breeze pic in particular is so different from the stock NS reports page that I really can't fathom what it gains by hitting the HTML site -- other than avoiding the API Happenings delay, of course. The content is slower to generate, liable to change format without warning, and Breeze throws away the HTML formatting it comes wrapped in... so what is the unavoidable need to fetch it from the HTML site, rather than consume XML or JSON from an API? Or, equally, when it's sending commands, why is it essential to send those commands to the HTML site, not the API?


You're right, this could in principle be done with the correct API endpoints and with Breeze as a standalone tool, but that's not achievable right now. Furthermore, it would mean Breeze becomes much bigger than it is; the reports page is the only page it restylizes, and otherwise it only provides keybinds for existing pages (i.e. pressing "M" on a region page presses the move button, pressing "E" on a nation page endorses it, etc.).
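
(To illustrate how thin that keybind layer is, here is a rough sketch of the idea, not Breeze's actual code; the selectors are guesses and would need to match the real page markup.)
Code:
// Hedged sketch of a Breeze-style keybind layer: keys merely click buttons
// that already exist on the page, so the script crafts no requests itself.
document.addEventListener("keydown", (ev) => {
  if (ev.target instanceof HTMLInputElement) return; // don't hijack typing
  if (ev.key === "m") {
    document.querySelector('button[name="move_region"]')?.click(); // guessed selector
  } else if (ev.key === "e") {
    document.querySelector('input[value="Endorse"]')?.click(); // guessed selector
  }
});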

[violet] wrote:If the answer is just "the API Happenings delay," then that's what I was referring to earlier, where it's a script that hits the HTML because it wants to do something the API doesn't currently permit. And, to be clear, I'm not saying I want to force all HTML scripts to eat that API delay. I'm saying we would have a lot of questions about what kind of delay there should be, if any, for this and other new API endpoints.


Absolutely, plus there could be other ideas for efficiently transmitting changes through the API (for example, a WebSocket endpoint for the activity log that avoids costly poll requests). This would make tooling better for sure and be easier on the servers, so I'm with you on this: it could certainly be a desirable future for everybody involved.

[violet] wrote:So these range from simple themes and stylesheets (no concern to admin at all), to tools that add or move around buttons (generally ok), to tools that handle executing requests and processing the returned data (our area of concern). None of them bar the last are currently hitting the HTML site, right? So they're not the subject of discussion. And the last kind, which is much closer to a bot than a UI augmentation, is just as capable of querying the API as it is of querying the HTML site, as far as I can see.


How would you categorize Telescope's Endorse button, which is injected for every nlink and calls endorse.cgi when clicked by the user?
Helpful Resources: One Stop Rules Shop | API documentation | NS Coders Discord
About me: Longest serving Prime Minister in TSP | Former First Warden of TGW | aka Curious Observations

Feel free to TG me, but not about moderation matters.

User avatar
SherpDaWerp
Technical Moderator
 
Posts: 1897
Founded: Mar 02, 2016
Benevolent Dictatorship

Postby SherpDaWerp » Wed Jun 15, 2022 4:20 am

non-R/D'ing scripter here :wave:

Every time this comes up a bunch of cards scripters come out of the woodwork and whinge about how it'll affect all the QoL tools that are out there. From where I'm standing, reading this quote:
[violet] wrote:So these range from simple themes and stylesheets (no concern to admin at all), to tools that add or move around buttons (generally ok), to tools that handle executing requests and processing the returned data (our area of concern). None of them bar the last are currently hitting the HTML site, right? So they're not the subject of discussion. And the last kind, which is much closer to a bot than a UI augmentation, is just as capable of querying the API as it is of querying the HTML site, as far as I can see.

(and others) it seems like this is completely not your plan, and I've pointed this out to others a number of times now.

If there were a bit more specific wording employed (i.e. less "ban html scripts" and more "ban html request-making scripts") or some Official Wording about what's actually being considered, that would quell a lot of fears in the Cards community. It's a bit like putting the cart before the horse, I know, drafting wording for a rule that you're not even convinced you need yet, but if there were a specific "admin do not intend to ban your QoL tool" post to point at, there'd be a few fewer Cards voices against this change.
Became an editor on 18/01/23, techie on 29/01/24

Rampant statistical speculation from before then is entirely unofficial

User avatar
[violet]
Executive Director
 
Posts: 16207
Founded: Antiquity

Postby [violet] » Wed Jun 15, 2022 4:21 pm

Roavin wrote:Absolutely, plus there could be other ideas to efficiently transmit changes through the API (for example, a Websocket endpoint for activity log without requiring costly poll requests). This would make tooling better for sure, plus be better for servers, so I'm with you on this, that this could certainly be a desirable future for everybody involved.

I have already built a basic but functional event-based API for Happenings, which runs on Node and notifies listening clients via SSE. It would be approximately a million times better than the current situation of scripts polling for HTML multiple times per second.
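
(For script authors wondering what consuming such a feed might look like: a minimal client sketch using the browser's standard EventSource API. The endpoint URL is purely hypothetical, since this API isn't public yet.)
Code:
// Hedged sketch of an SSE happenings client. The URL is a placeholder.
const source = new EventSource("https://api.nationstates.net/happenings"); // hypothetical endpoint
source.onmessage = (ev) => {
  console.log("happening:", ev.data); // one push per event -- no polling loop
};
source.onerror = () => {
  // EventSource reconnects automatically after transient failures
  console.warn("happenings stream interrupted; retrying...");
};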

User avatar
[violet]
Executive Director
 
Posts: 16207
Founded: Antiquity

Postby [violet] » Wed Jun 15, 2022 4:28 pm

SherpDaWerp wrote:If there was a bit more specific wording employed (i.e. less "ban html scripts" and more "ban html request-making scripts") or some Official Wording about what's actually being considered, that would quell a lot of fears in the Cards community. It's a bit like putting the cart before the horse, I know, drafting a wording for a rule that you're not even convinced you need yet, but if there was a specific "admin do not intend to ban your QoL tool" post to point at, there'd be a few less Cards voices against this change.

Well maybe "ban" is the wrong word, since I'm actually talking about a migration, whereby scripts that currently send requests to https://www.nationstates.net would send them to api.nationstates.net instead. (It's a bit more involved than that, admittedly, but not a whole lot.) I don't want to ban any tools -- I want to change where they're sending traffic. And if they're not sending any traffic to the HTML site, they're not my concern.

User avatar
[violet]
Executive Director
 
Posts: 16207
Founded: Antiquity

Postby [violet] » Wed Jun 15, 2022 4:39 pm

Roavin wrote:How would you categorize Telescope's Endorse button, which is injected for every nlink and calls endorse.cgi when clicked by the user?

I'm not familiar with that tool. If it's only adding dumb buttons that are found elsewhere on the page or site, it's maybe just restyling. If it's dynamically crafting URLs, or managing the sending and receiving of data, it's a script that could use the API.

User avatar
[violet]
Executive Director
 
Posts: 16207
Founded: Antiquity

Postby [violet] » Wed Jun 15, 2022 6:52 pm

Ever-Wandering Souls wrote:It's unfortunately almost impossible to get even objectively simple API calls added from the requests thread for them, much less complex new ones that replace things done via HTML scripting at present, /much less/ things like an account/puppet manager/login switcher that admin has repeatedly already said is infeasible. =/

Just digging this out to reply, because I suspect it gets closer to the genuine concern people have about being shifted to the API.

So for context the API serves over half a million requests every day, backing a wide range of third-party sites and tools -- and I think it generally delivers exactly what people want. I don't think we've ever allowed any major deficiency or problem to stand for too long, for example. But there are, of course, plenty of requests for more things to be added (likewise with the HTML site), and we don't implement them all, even some that are plainly good ideas.

I've added new APIs (such as the Trading Cards API) when new demand has emerged, but we have a chicken/egg situation with many parts of the site, because script authors have no motivation to use the API, so there's no demand for endpoints, so there's no interest in the API. This is particularly the case in R/D, where scripts work closely with users, rather than being fully automated, as there's no clear benefit to switching anyway. This can really only be fixed by biting the bullet and forcing a migration.

So yes, the API thread is full of people asking for things and not getting them from admin, and it's reasonable to worry that this will be the case for supposed new endpoints. But I think people should realize that the API is very mature in its existing offerings, and a lot of new requests are either fairly niche or else major features that are only worth coding if we can be confident that script authors will actually use them.

User avatar
Sandaoguo
Diplomat
 
Posts: 541
Founded: Apr 07, 2013
Left-Leaning College State

Postby Sandaoguo » Wed Jun 15, 2022 7:13 pm

[violet] wrote:So yes, the API thread is full of people asking for things and not getting them from admin, and it's reasonable to worry that this will be the case for supposed new endpoints. But I think people should realize that the API is very mature in its existing offerings, and a lot of new requests are either fairly niche or else major features that are only worth coding if we can be confident that script authors will actually use them.

Can you put in place a reasonable triage for new requests? Open up something like GitHub Issues for enhancement/feature requests, where they can be tracked and assigned priority? Things that are simply not feasible to do through the API alone with the low rate limits should be high priority, for example. An endpoint that returns all endorsable nations in a region for X input would replace the endoswapping tools that rely on a complex combo of daily dumps, rate-limited API calls, or page scraping.* In the past, we've basically been told that the status quo is fine, but it's that kind of necessary added complexity that turns people off from using the API. (We've also been told, though this was years ago so maybe it's changed, that complex endpoints aren't added because the NS server would have to make many fetch requests itself to build them!)

There are also tools that could be converted from being "hosted" by the client to centralized on a server, and would ultimately use fewer resources, if endpoints were created for those scenarios. There's a separate issue about the very low rate limits that I want to talk about in another post, since I lack a ton of free time at the moment.

Overall, I'm skeptical of the ability to convert all existing and future HTML scripts to API usage, either client- or server-based, because NationStates simply wasn't written API-first. But if you're willing to add complex endpoints, essentially taking the complicated work that 3rd party devs have to do and doing it all in the NS backend, that would actually help API adoption. Most 3rd party developers aren't experienced; even building a token bucket to avoid breaking the API rate limit is incredibly complex for them.

*A better idea would be to just build this natively, but that's an aside. I'm just using endoswapping as an example.
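
(For reference, a minimal token bucket is smaller than its reputation suggests; a rough sketch follows. The 50-requests-per-30-seconds figures match the rate limit mentioned later in this thread, but should be checked against the current API documentation.)
Code:
// Hedged sketch of a token bucket rate limiter for API calls.
class TokenBucket {
  constructor(capacity = 50, windowMs = 30000) {
    this.capacity = capacity;               // maximum burst size
    this.tokens = capacity;                 // start full
    this.refillPerMs = capacity / windowMs; // steady refill rate
    this.last = Date.now();
  }
  take() {
    const now = Date.now();
    this.tokens = Math.min(this.capacity,
      this.tokens + (now - this.last) * this.refillPerMs);
    this.last = now;
    if (this.tokens < 1) return false; // caller should wait and retry
    this.tokens -= 1;
    return true;
  }
}
A script would call take() before each request and sleep briefly whenever it returns false.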
Last edited by Sandaoguo on Wed Jun 15, 2022 7:21 pm, edited 4 times in total.

User avatar
[violet]
Executive Director
 
Posts: 16207
Founded: Antiquity

Postby [violet] » Wed Jun 15, 2022 9:45 pm

Sandaoguo wrote:Things that are simply not feasible to do through the API alone with the low rate limits should be high priority, for example. An endpoint that returns all endorsable nations in a region for X input would replace the endoswapping tools that rely on a complex combo of daily dumps, rate-limited API calls, or page scraping.* In the past, we've basically been told that the status quo is fine, but it's that kind of necessary added complexity that turns people off from using the API.

Sorry to zero in on this part and ignore your questions, but I'm not proposing to add endpoints that offer functionality that isn't currently available on the HTML site either. That kind of thing might be looked at, especially where it lets everyone cut down the number of requests needed to do something, but it wouldn't be a high priority. The priority would be adding API support for things you currently do on the HTML site.

User avatar
Roavin
Admin
 
Posts: 1778
Founded: Apr 07, 2016
Democratic Socialists

Postby Roavin » Thu Jun 16, 2022 3:27 am

[violet] wrote:
Roavin wrote:Absolutely, plus there could be other ideas to efficiently transmit changes through the API (for example, a Websocket endpoint for activity log without requiring costly poll requests). This would make tooling better for sure, plus be better for servers, so I'm with you on this, that this could certainly be a desirable future for everybody involved.

I have already built a basic but functional event-based API for Happenings, which runs on Node and notifies listening clients via SSE. It would be approximately a million times better than the current situation of scripts polling for HTML multiple times per second.


I cannot overstate how fantastic this is to hear.

[violet] wrote:
Roavin wrote:How would you categorize Telescope's Endorse button, which is injected for every nlink and calls endorse.cgi when clicked by the user?

I'm not familiar with that tool. If it's only adding dumb buttons that are found elsewhere on the page or site, it's maybe just restyling. If it's dynamically crafting URLs, or managing the sending and receiving of data, it's a script that could use the API.


Well, it's manually sending an XHR with form data to endorse.cgi, so the latter.
Helpful Resources: One Stop Rules Shop | API documentation | NS Coders Discord
About me: Longest serving Prime Minister in TSP | Former First Warden of TGW | aka Curious Observations

Feel free to TG me, but not about moderation matters.

User avatar
Sandaoguo
Diplomat
 
Posts: 541
Founded: Apr 07, 2013
Left-Leaning College State

Postby Sandaoguo » Thu Jun 16, 2022 8:01 am

[violet] wrote:
Sandaoguo wrote:Things that are simply not feasible to do through the API alone with the low rate limits should be high priority, for example. An endpoint that returns all endorsable nations in a region for X input would replace the endoswapping tools that rely on a complex combo of daily dumps, rate-limited API calls, or page scraping.* In the past, we've basically been told that the status quo is fine, but it's that kind of necessary added complexity that turns people off from using the API.

Sorry to zero in on this part and ignore your questions, but I'm not proposing to add endpoints that offer functionality that isn't currently available on the HTML site either. That kind of thing might be looked at, especially where it lets everyone cut down the number of requests needed to do something, but it wouldn't be a high priority. The priority would be adding API support for things you currently do on the HTML site.

I think there's a real disconnect between what you believe devs are doing and what devs are actually doing. When it comes to the endoswapping tools that pretty much every large region uses, those can be done solely with page scraping; it would require thousands of requests, but be totally legal under the scripting rules. We try to use the API simply because you've requested that we do it. But the API is imperfect, and indeed its own documentation says it can't feasibly be used for these kinds of purposes. If you're going to ban HTML scripts and tell devs to use the API, then you need to be willing to add API endpoints for common use cases. "What endorsable nations in this region is X not endorsing?" is a very common use case that doesn't have its own endpoint. In the past, 3rd party devs have been told it's too complicated for you to add it, because there are "too many fetches" needed to build the data. That can't be the answer if you want to ban HTML scripts and tell everybody the API can do everything instead.

It's complicated for us to recreate this data in the first place, and we're likely using far more resources than you'd be using to build a single API endpoint. Using a mix of daily dumps, regular API calls, and page scraping is a hack-ish way of building an endoswapping tool. It's the perfect test case for everybody's concerns about API adoption. Using the daily dumps alone isn't feasible because users simply will not adopt it: the data inaccuracy is unacceptable to end users. You and Elu may think it's acceptable that end users "just wait a day" to pick up the missing endorse targets, and just deal with every instance of a nation no longer being in the same region or in the WA. But that's because you're viewing things solely from a development viewpoint and not from the perspective of an end user. At the end of the day, if the tool isn't being used because of the limitations of the NS API (I'm including the daily dumps as part of the API for simplification), then developers aren't going to want to use the API. If you ban HTML scripts and don't create requested endpoints, then you're just hampering development entirely.

For the endoswapping test case, in order to utilize the daily dumps in an efficient and user-acceptable way, the tool has to be centrally delivered from a server. Asking end users to download the daily dumps every day in order to use a client tool isn't realistic. The consequence is that a server-based tool is limited to a single rate limit pool, unlike client tools, where each client has its own rate limit. So endoswapping tools get the worst of all possible worlds, just so we can comply with NS's preference for the API over page scraping. We get stuck with outdated daily dumps delivering day-old data, and have to throttle end users if we try supplementing that data with API calls.

Endoswapping is just one test case, but there are plenty more where we're going to have a ton of skepticism towards "the API will replace HTML scripts." If you're already squashing the idea of adding an endpoint for a common use case because it's not "available on the HTML site", then that just multiplies the skepticism. We're writing these tools exactly because the functionality users want doesn't exist on the site, and there's little to no hope that it ever will. NS needs to either promote 3rd party development, making it easier and the rules clearer, with a better API that is responsive to 3rd party dev needs and requests, or take the features 3rd party devs are creating and adopt them into the game natively.

User avatar
[violet]
Executive Director
 
Posts: 16207
Founded: Antiquity

Postby [violet] » Thu Jun 16, 2022 6:42 pm

Sandaoguo wrote:If you're going to ban HTML scripts and tell devs to use the API, then you need to be willing to add in API endpoints for common use cases.

Right, but what I think you're proposing is that I not only add endpoints for data that's currently on the HTML site, but also write entirely new APIs providing new, more efficient ways to get data that doesn't currently exist anywhere -- and I should do this before anyone will switch to the API.

There's a big step up in admin time required once we start talking about building entirely new features. Each of those might take orders of magnitude longer than simply adding an API endpoint for something we already serve via HTML. So it's not something I can realistically commit to at the same time as managing a migration.

Reverse endos are a particularly weird case because it's hard for us to generate them, too. That's why there's no convenient way to figure out who you're endorsing in one fetch, no matter how you access it. It's inefficient via HTML, it's inefficient via API, it's inefficient on the back end. So obviously the ideal solution is that we fix this somehow, then serve that data in various different ways, including on the API -- but that's a substantial admin project. In the meantime, it doesn't have anything to do with whether scripts use the HTML site or the API for the data that is currently available.

User avatar
[violet]
Executive Director
 
Posts: 16207
Founded: Antiquity

Postby [violet] » Thu Jun 16, 2022 6:50 pm

So fyi here are what I think are some reasonable concerns people might have about being shifted to the API:
  • Admin might not actually get around to adding the API endpoints needed
  • The API Happenings delay might not be removed
  • Once admin can see what everyone's scripts are doing, new restrictive rules might be added
  • People might be outcompeted by fully-automated API bots
I suspect some of the above might be behind arguments that scripts are somehow technically unable to interact with api.nationstates.net rather than www. I want it known that it's fine to hold the above concerns, and we can discuss them.

User avatar
Refuge Isle
Technical Moderator
 
Posts: 1900
Founded: Dec 14, 2018
Left-wing Utopia

Postby Refuge Isle » Thu Jun 16, 2022 8:06 pm

[violet] wrote:So fyi here are what I think are some reasonable concerns people might have about being shifted to the API:
  • Admin might not actually get around to adding the API endpoints needed
  • The API Happenings delay might not be removed
  • Once admin can see what everyone's scripts are doing, new restrictive rules might be added
  • People might be outcompeted by fully-automated API bots
I suspect some of the above might be behind arguments that scripts are somehow technically unable to interact with api.nationstates.net rather than www. I want it known that it's fine to hold the above concerns, and we can discuss them.

For r/d purposes, a happenings delay is non-functional. Raiders can move into a region within a second of when it updates, without any kind of tools except competently interpreting the data dump hours before update begins. Learning about a raid 28 seconds after the fact, or what have you, is unworkable when the reaction window is outrageously small. We would not even attempt to use API happenings, but rather a composition of calls to the nation, region, and WA APIs, whose shards impose a lesser arbitrary penalty, accomplishing the same objective with more calls. So there's really no reason to keep the delay.

To further reiterate my concerns stated previously, a 50/30s rate limit is equally unworkable if the status quo is refreshing 10 times a second looking for that activity. As wins and losses are already bottlenecked by human reaction time, restricting them further by severely limiting how many times you can refresh puts us back at square one, where even manually refreshing an HTML reports page a billion times would improve our chances of success.
Last edited by Refuge Isle on Thu Jun 16, 2022 8:07 pm, edited 1 time in total.
