I've included Facebook comments on my posts here since 2011, as a way to make it easier for people to follow the discussion without using Facebook. I initially implemented this via a Facebook app running as me. This worked fine until Facebook's recent app restrictions in response to the Cambridge Analytica scandal.
The information I'm trying to include here, however, is fully public: if you follow a link to an example Facebook crosspost while not logged into Facebook, you can still read the comments. So I've switched from using the API, with its privileges to read anything I can read, to just scraping the public-facing page.
This has two components:
First, make a request in a javascript-running browser to get the temporary tokens I need for FB to allow my request. I tried to use Selenium for this, but the tiny VPS I host this blog on has too little memory to run a browser, so I use the WebPageTest API instead. I have this set to run automatically each night, making a single request, via this python script as a cron job.
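Here's a minimal sketch of what that nightly job looks like, assuming the standard WebPageTest REST endpoints (runtest.php and jsonResult.php); the API key, the example URL, the output path, and the token-extraction step are placeholders, not the actual script linked above:

```python
import json
import time

import requests

WPT = "https://www.webpagetest.org"
WPT_KEY = "YOUR_API_KEY"                      # placeholder API key
TARGET = "https://www.facebook.com/somepost"  # placeholder: a public FB post URL
TOKEN_FILE = "/var/cache/fb-tokens.json"      # placeholder: read later by the comment loader


def submit_test(url):
    # Ask WebPageTest to load the page in a real, javascript-running browser,
    # saving response bodies so whatever FB embeds in the page gets captured.
    r = requests.get(WPT + "/runtest.php",
                     params={"url": url, "k": WPT_KEY, "f": "json", "bodies": 1})
    r.raise_for_status()
    return r.json()["data"]["testId"]


def wait_for_result(test_id):
    # Poll until the test finishes; statusCode 200 means results are ready.
    while True:
        r = requests.get(WPT + "/jsonResult.php", params={"test": test_id})
        result = r.json()
        if result.get("statusCode") == 200:
            return result
        time.sleep(30)


if __name__ == "__main__":
    result = wait_for_result(submit_test(TARGET))
    # Pulling the specific temporary tokens out of the captured data is
    # elided here; this just saves the whole result for later processing.
    with open(TOKEN_FILE, "w") as f:
        json.dump(result, f)
```

Run from cron each night, e.g. `0 3 * * * python3 fetch-fb-tokens.py` (schedule and filename made up), so the comment loader always has reasonably fresh tokens on hand.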
Second, when loading comments, use those saved tokens to make the same kind of AJAX request Facebook's front end makes. This happens in response to a user viewing a post with comments, and is handled in this script.
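A rough sketch of that loader, assuming the tokens were saved as JSON by the nightly job; the endpoint and parameter names below are placeholders, since the real ones come from watching the requests Facebook's front end actually makes for a public post:

```python
import json

import requests

TOKEN_FILE = "/var/cache/fb-tokens.json"             # placeholder: written by the nightly job
AJAX_URL = "https://www.facebook.com/ajax/comments"  # placeholder endpoint


def fetch_comments(post_id):
    # Tokens captured the previous night by the WebPageTest run.
    with open(TOKEN_FILE) as f:
        tokens = json.load(f)

    # Replay the same kind of request the front end makes, passing the saved
    # temporary tokens along as request parameters.
    r = requests.get(AJAX_URL,
                     params={"post_id": post_id, **tokens},
                     headers={"User-Agent": "Mozilla/5.0"})
    r.raise_for_status()
    return r.text  # raw response, parsed elsewhere into comment text for display
```

Since this only replays a request any logged-out visitor could make, it can only ever see the same public comments you'd see by following the crosspost link yourself.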
(This is how my Google Plus integration has worked from the beginning, except that it doesn't require any tokens and so only needs the second stage.)
Comment via: google plus, facebook