Using PostgreSQL to store URLs for a web crawler

From: Simon Connah
Subject: Using PostgreSQL to store URLs for a web crawler
Date:
Msg-id: 0d3a8c63-55c7-5250-c52a-4244d18e6c92@gmail.com
Responses: Re: Using PostgreSQL to store URLs for a web crawler (Fabio Pardi <f.pardi@portavita.eu>)
List: pgsql-novice
First of all, apologies if this is a stupid question, but I've never used
a database for something like this and I'm not sure whether what I want
is possible with PostgreSQL.

I'm writing a simple web crawler in Python. Its first task is to fetch a
list of domain names and URL paths from the database, download the HTML
for each of those URLs, and save it in an object store.
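
For context, here is roughly what I have in mind for that first step.
This is just a sketch: the urls table, its columns, the connection
string, and the save_to_object_store helper are all placeholders, not
working code from my project.

    import psycopg2
    import requests

    def save_to_object_store(url_id, html):
        # Placeholder: in the real crawler this would upload to an
        # object store such as S3; here it just writes a local file.
        with open(f"/tmp/page_{url_id}.html", "w", encoding="utf-8") as f:
            f.write(html)

    def crawl_batch(batch_size=100):
        # Hypothetical schema: urls(id, domain, path, crawled_at)
        conn = psycopg2.connect("dbname=crawler")  # placeholder DSN
        with conn, conn.cursor() as cur:
            cur.execute(
                "SELECT id, domain, path FROM urls "
                "WHERE crawled_at IS NULL LIMIT %s",
                (batch_size,),
            )
            rows = cur.fetchall()

        for url_id, domain, path in rows:
            html = requests.get(f"https://{domain}{path}").text
            save_to_object_store(url_id, html)
            # Mark the URL as crawled so the next batch skips it.
            with conn, conn.cursor() as cur:
                cur.execute(
                    "UPDATE urls SET crawled_at = now() WHERE id = %s",
                    (url_id,),
                )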

Then another process goes through the downloaded HTML, extracts all the
links on each page, and saves them to the database (if the URL does not
already exist there), so the next time the crawler runs it picks up even
more URLs to crawl. This process feeds on itself, so the number of URLs
saved in the database will grow very quickly.
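
For the "insert only if new" part, my plan is to put a unique constraint
on the URL columns and rely on PostgreSQL's INSERT ... ON CONFLICT DO
NOTHING so duplicate links are silently skipped. Again a sketch, reusing
the same placeholder table:

    import psycopg2
    from psycopg2.extras import execute_values

    def save_links(conn, links):
        # links is a list of (domain, path) tuples from one page.
        # With a UNIQUE (domain, path) constraint on the urls table,
        # ON CONFLICT DO NOTHING skips URLs that are already stored.
        with conn, conn.cursor() as cur:
            execute_values(
                cur,
                "INSERT INTO urls (domain, path) VALUES %s "
                "ON CONFLICT (domain, path) DO NOTHING",
                links,
            )

execute_values batches all the links into a single INSERT statement,
which I'm hoping will also help with the write volume I mention below.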

I'm just a bit concerned that writing this much data to PostgreSQL this
quickly would cause performance issues.

Is this something PostgreSQL could handle well? I won't be running the
PostgreSQL server myself; I'll be using Amazon RDS to host it.

