Seems like AngularJS is perfect for something like that, though I don’t know if there’s a technical name for that functionality. The classic way is to have some sort of index marker, so you know how deep into your list you are. The server keeps all the articles ordered by date, stored in memory (or in a database it can load from, whatever the case may be), and when you make the first call it just returns the first 10. For the “Load More” menu option, you pass along which set of indexes you want: the first call passes 0 (or 1, up to you), and each later call increments it by 1. That way the server knows “this request wants the next 10 articles” and can pull exactly those 10 out of its array.
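Roughly, the paging math looks something like this. Just a sketch, assuming a Node/Express-style server with an in-memory `articles` array already sorted newest-first; the route name, `PAGE_SIZE`, and `loadArticlesSortedByDate` are placeholders, not anything from your actual setup:

```javascript
var express = require('express');
var app = express();

var PAGE_SIZE = 10;
var articles = loadArticlesSortedByDate(); // hypothetical helper that returns articles sorted by date

app.get('/api/articles', function (req, res) {
  // 0-based page index passed up by the client on each "Load More"
  var page = parseInt(req.query.page, 10) || 0;
  var start = page * PAGE_SIZE; // how deep into the list this request is
  res.json(articles.slice(start, start + PAGE_SIZE));
});
```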
In Angular, you can just take the results that come back from the server and append them to the array holding the existing articles, and they should show up automatically as soon as the data object bound to the display is updated.
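Something along these lines on the Angular side; again just a sketch, and the module/controller names and the /api/articles URL are made up for the example:

```javascript
angular.module('articlesApp', [])
  .controller('ArticleListCtrl', ['$scope', '$http', function ($scope, $http) {
    $scope.articles = [];
    var page = 0;

    $scope.loadMore = function () {
      $http.get('/api/articles', { params: { page: page } }).then(function (response) {
        // Append the new batch; anything bound to $scope.articles (e.g. an ng-repeat)
        // re-renders on its own once the array is updated.
        $scope.articles = $scope.articles.concat(response.data);
        page++;
      });
    };

    $scope.loadMore(); // grab the first 10 up front
  }]);
```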
The downside to this is that if the source list changes (i.e. a new article is loaded and stored on the server), then when you retrieve the next set of 10 it may bring back duplicates (i.e. article 10, loaded in the first batch, is now sitting at position 11, so it comes back again when you click “Load More”). Solving those sorts of concurrency issues is a bit harder, though they’re likely small in the grand scheme of things.

Sites like reddit build their cache and then seem to keep multiple caches over time, identifying which one the user is working from in the URL. Clicking next shows something like https://www.reddit.com/?count=25&after=t3_5uxq5e, which means “give me the next set of articles, starting after the 25 I’ve already seen, where t3_5uxq5e is the id of the last article I saw.” If you ever dig too deep into reddit, you’ll notice you eventually get a blank front page; that’s because the listing from when you started digging is no longer cached, so they have no idea how to show you the data you’re asking for.
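One way around the duplicate problem is to page by an “after” id instead of by a numeric offset, which is roughly what reddit’s ?after= parameter is doing. Reworking the earlier handler sketch to accept an `after` id (same `articles` and `PAGE_SIZE` placeholders, and assuming each article has a unique `id`):

```javascript
app.get('/api/articles', function (req, res) {
  var after = req.query.after; // id of the last article the client already has, if any
  var start = 0;
  if (after) {
    var idx = articles.findIndex(function (a) { return a.id === after; });
    // If the anchor article has fallen out of the list, just start from the top
    start = idx >= 0 ? idx + 1 : 0;
  }
  res.json(articles.slice(start, start + PAGE_SIZE));
});
```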
If you look at shoryuken, for instance, it loads from a URL like http://shoryuken.com/page/3/ where 3 indicates you want the 3rd set of 10 articles. Try putting in some huge number, like 1000, and you’ll see it goes back as far as 5 years ago.
And then try putting in negative numbers and see what happens. For some reason, it actually loads up “random” articles instead of a page listing, which is pretty interesting lol
edit: It seems to search for articles based on whatever keyword you put after /page/ and brings back the best match it can, so going to shoryuken.com/page/test brings up “http://shoryuken.com/2016/02/17/test-street-fighter-vs-netplay-during-playstation-networks-free-multiplayer-weekend/”
edit 2: It seems I took too much time to explain what you said you were already doing, so disregard lol