Automated Content Access Protocol Explained

Automated Content Access Protocol ("ACAP") was proposed in 2006 as a method of providing machine-readable permissions information for content, in the hope that it would allow automated processes (such as search-engine web crawling) to comply with publishers' policies without the need for human interpretation of legal terms. ACAP was developed by organisations that claimed to represent sections of the publishing industry (World Association of Newspapers, European Publishers Council, International Publishers Association).[1] It was intended to support more sophisticated online publishing business models, but was criticised for being biased towards the fears of publishers, who see search and aggregation as a threat[2] rather than as a source of traffic and new readers.

Status

In November 2007 ACAP announced that the first version of the standard was ready. No non-member publishers or search engines have adopted it. A Google spokesman appeared to rule out adoption.[3] In March 2008, Google's CEO Eric Schmidt stated that "At present it does not fit with the way our systems operate".[4] No progress has been announced since those remarks, and Google,[5] along with Yahoo! and MSN, has since reaffirmed its commitment to the use of robots.txt and sitemaps.

In 2011 management of ACAP was turned over to the International Press Telecommunications Council (IPTC), which announced that ACAP 2.0 would be based on Open Digital Rights Language (ODRL) 2.0.[6]

Previous milestones

In April 2007 ACAP commenced a pilot project in which the participants and technical partners undertook to specify and agree on various use cases for ACAP to address. A technical workshop, attended by the participants and invited experts, was held in London to discuss the use cases and agree on next steps.

In February 2007 the pilot project was launched and the participants were announced.

ACAP completed its feasibility stage and was formally announced[7] at the Frankfurt Book Fair on 6 October 2006. A pilot programme commenced in January 2007 involving a group of major publishers and media groups working alongside search engines and other technical partners.

ACAP and search engines

ACAP rules can be considered as an extension to the Robots Exclusion Standard (or "robots.txt") for communicating website access information to automated web crawlers.
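For comparison, a conventional robots.txt file offers only a binary choice per path: a crawler may fetch it or it may not. A minimal illustrative example (example.com is a placeholder host):

```
# Classic robots.txt: each path is simply crawlable or not
User-agent: *        # applies to all crawlers
Disallow: /private/  # do not crawl anything under /private/
Allow: /public/      # explicit allow (a widely supported extension)
Sitemap: https://example.com/sitemap.xml
```

Compliance is voluntary: the file communicates the site owner's wishes, and well-behaved crawlers fetch and honour it before crawling.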

It has been suggested[8] that ACAP is unnecessary, since the robots.txt protocol already exists for the purpose of managing search engine access to websites. However, others[9] support ACAP’s view[10] that robots.txt is no longer sufficient. ACAP argues that robots.txt was devised at a time when both search engines and online publishing were in their infancy and as a result is insufficiently nuanced to support today’s much more sophisticated business models of search and online publishing. ACAP aims to make it possible to express more complex permissions than the simple binary choice of “inclusion” or “exclusion”.
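As a sketch of what such finer-grained permissions look like, ACAP 1.0 defines "ACAP-"prefixed fields that extend robots.txt and distinguish separate usages such as crawling, indexing and preservation (caching). The field names below are representative of that approach and should be checked against the ACAP technical specification before use:

```
# ACAP version=1.0
ACAP-crawler: *
# Crawling and indexing are separate, independently grantable usages
ACAP-allow-crawl: /news/
ACAP-allow-index: /news/
# Crawling of the archive is permitted, but indexing and caching are not
ACAP-allow-crawl: /archive/
ACAP-disallow-index: /archive/
ACAP-disallow-preserve: /archive/
```

Under plain robots.txt, the /archive/ case above could only be expressed as a blanket allow or disallow; ACAP's aim was precisely to separate these usages.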

As an early priority, ACAP is intended to provide a practical and consensual solution to some of the rights-related issues which in some cases have led to litigation[11][12] between publishers and search engines.

The Robots Exclusion Standard has always been implemented voluntarily by both content providers and search engines, and ACAP implementation is similarly voluntary for both parties.[13] However, Beth Noveck has expressed concern that the emphasis on communicating access permissions in legal terms will lead to lawsuits if search engines do not comply with ACAP permissions.[14]

No public search engine recognises ACAP. Only one, Exalead, ever confirmed that it would adopt the standard,[15] and it has since ceased functioning as a search portal to focus on the software side of its business.

Comment and debate

The project has generated considerable online debate in the search,[16] content[17] and intellectual property[18] communities. If there are any common themes in the commentary, they are:

  1. that keeping the specification simple will be critical to its successful implementation, and
  2. that the aims of the project are focussed on the needs of publishers, rather than readers. Many have seen this as a flaw.[19]

Notes and References

  1. ACAP FAQ: "Where is the driving force behind ACAP?" http://www.the-acap.org/FAQs.php#faq15
  2. Ian Douglas, "Acap: a shot in the foot for publishing", 3 December 2007. Archived 14 November 2009 at https://web.archive.org/web/20091114081002/http://blogs.telegraph.co.uk/technology/iandouglas/3624601/Acap_a_shot_in_the_foot_for_publishing/. Retrieved 3 May 2012.
  3. Search Engine Watch report of Rob Jonas's comments on ACAP. http://blog.searchenginewatch.com/blog/080313-090443
  4. Stuart Corner, "ACAP content protection protocol 'doesn't work' says Google CEO", iTWire, 18 March 2008. Retrieved 11 March 2018.
  5. "Improving on Robots Exclusion Protocol", Official Google Webmaster Central Blog. http://googlewebmastercentral.blogspot.com/2008/06/improving-on-robots-exclusion-protocol.html
  6. IPTC media release: "News syndication version of ACAP ready for launch and management handed over to the IPTC". http://www.iptc.org/site/Home/Media_Releases/News_syndication_version_of_ACAP_ready_for_launch_and_management_handed_over_to_the_IPTC
  7. Official ACAP press release announcing the project launch. http://www.the-acap.org/press_releases/Frankfurt_acap_press_release_6_oct_06.pdf
  8. "News Publishers Want Full Control of the Search Results". http://googlesystem.blogspot.com/2006/09/news-publishers-want-full-control-of.html
  9. "Why you should care about Automated Content Access Protocol", yelvington.com, 16 October 2006. Archived 11 November 2006 at https://web.archive.org/web/20061111015733/http://www.yelvington.com/20061016/why_you_should_care_about_automated_content_access_protocol. Retrieved 11 March 2018.
  10. ACAP FAQ: "What about existing technology, robots.txt and why?" Archived 8 March 2018 at https://web.archive.org/web/20180308070121/http://www.the-acap.org/FAQs.php#faq6. Retrieved 11 March 2018.
  11. "Is Google Legal?", OUT-LAW article about the Copiepresse litigation. http://www.out-law.com/page-7427
  12. Guardian article about Google's failed appeal in the Copiepresse case. http://media.guardian.co.uk/newmedia/comment/0,,2013051,00.html
  13. Ryan Paul, "A skeptical look at the Automated Content Access Protocol", Ars Technica, 14 January 2008. Retrieved 9 January 2018.
  14. Beth Simone Noveck, "Automated Content Access Protocol", Cairns Blog, 1 December 2007. Retrieved 9 January 2018.
  15. "Exalead Joins Pilot Project on Automated Content Access". http://www.exalead.com/software/news/press-releases/2007/07-01.php
  16. Search Engine Watch article. http://blog.searchenginewatch.com/blog/060922-104102
  17. Shore.com article about ACAP. http://shore.com/commentary/newsanal/items/2006/200601002publishdrm.html
  18. IP Watch article about ACAP. http://www.ip-watch.org/weblog/index.php?p=408&res=1280_ff&print=0
  19. Ian Douglas, "Acap shoots back", 23 December 2007. Archived 7 September 2008 at https://web.archive.org/web/20080907233655/http://blogs.telegraph.co.uk/technology/iandouglas/jan2008/acapshootsback.htm.