    Searched refs: Crawler (Results 1 - 4 of 4)

  /external/okhttp/samples/crawler/src/main/java/com/squareup/okhttp/sample/
Crawler.java 42 public final class Crawler {
49 public Crawler(OkHttpClient client) {
126 System.out.println("Usage: Crawler <cache dir> <root>");
137 Crawler crawler = new Crawler(client);
138 crawler.queue.add(new URL(args[1]));
139 crawler.parallelDrainQueue(threadCount);
  /external/chromium_org/chrome/tools/
webforms_aggregator_tests.py 40 """Verify that the crawler is able to find a reg page from the given URL."""
41 c = webforms_aggregator.Crawler(self.url2)
43 c.Run(), msg='Crawler could not find the reg page of "%s"' % self.url2)
46 """Verify the threaded crawler finds reg page from a file of URLs."""
50 msg='Threaded crawler could not find the reg page from the URLs file')
webforms_aggregator_unittests.py 39 the aggregator crawler.</p>
274 """Verify that the crawler is able to find a reg page from the given URL."""
275 c = webforms_aggregator.Crawler(self.url1)
277 c.Run(), msg='Crawler could not find the reg page of "%s"' % self.url1)
280 """Verify that the crawler won't find a non existent reg page
282 c = webforms_aggregator.Crawler(self.url2)
285 msg='Crawler found a non existent reg page of "%s"' % self.url1)
288 """Verify the threaded crawler finds reg page from a file of URLs."""
292 msg='Threaded crawler could not find the reg page from the URLs file')
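Taken together, the tests above exercise a small public surface: construct a webforms_aggregator.Crawler with a URL, call Run(), and treat a truthy return as "registration page found". A minimal sketch of that usage, assuming webforms_aggregator is importable and Run() behaves as the test messages suggest (the URL below is an illustrative placeholder, not a value from the source):

import webforms_aggregator

url = 'http://example.com/register'  # hypothetical URL, not from the source
c = webforms_aggregator.Crawler(url)
if c.Run():  # truthy when a registration page is found, per the tests above
    print('Crawler found the reg page of "%s"' % url)
else:
    print('Crawler could not find the reg page of "%s"' % url)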
webforms_aggregator.py 74 multi object of the crawler object so that the corresponding pages get
313 class Crawler(object):
328 """Init crawler URL, links lists, logger, and creates a cookie temp file.
371 """Deletes cookie file when Crawler instances are destroyed."""
511 Links are copied to the links list of the crawler object, which holds all
512 the links found from all retrievers that the crawler object created. The
513 Crawler object exists for as long as a specific site is examined and the
538 """Runs the Crawler.
548 If no registration page is found, the Crawler object will give up its try
604 url: will be used as an argument to create a Crawler object later
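From the matched lines, the Crawler class in webforms_aggregator.py follows a simple lifecycle: __init__ (line 328) sets up the URL, links lists, logger, and a cookie temp file; Run() (line 538) drives the search and gives up if no registration page is found; __del__ (line 371) deletes the cookie file. A hedged skeleton of that shape, with all bodies and any details beyond the matched docstrings assumed for illustration:

import logging
import os
import tempfile

class Crawler(object):
    """Skeleton inferred from the matches above; bodies are illustrative."""

    def __init__(self, url):
        # Init crawler URL, links lists, logger, and a cookie temp file (line 328).
        self.url = url
        self.links = []
        self.logger = logging.getLogger('webforms_aggregator')
        fd, self.cookie_file_name = tempfile.mkstemp()
        os.close(fd)

    def __del__(self):
        # Deletes cookie file when Crawler instances are destroyed (line 371).
        if os.path.exists(self.cookie_file_name):
            os.unlink(self.cookie_file_name)

    def Run(self):
        # Runs the Crawler (line 538). Returns a truthy value when a
        # registration page is found, per the tests above; the real
        # search logic is omitted here.
        return False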
