Expertise finding

Expertise finding is the use of tools for finding and assessing individual expertise. In the recruitment industry, expertise finding is the problem of searching for employable candidates with a certain required skill set. In other words, it is the challenge of linking humans to expertise areas, and as such is a sub-problem of expertise retrieval (the other problem being expertise profiling).[1]

Importance of expertise

It can be argued that human expertise[2] is more valuable than capital, means of production, or intellectual property. Unlike expertise, these other aspects of capitalism are now relatively generic: access to capital is global, as is access to means of production for many areas of manufacturing, and intellectual property can likewise be licensed. Expertise finding is also a key aspect of institutional memory, as without its experts an institution is effectively decapitated. Yet finding and "licensing" expertise, the key to the effective use of these resources, remains much harder, starting with the very first step: finding expertise that you can trust.

Until very recently, finding expertise required a mix of individual, social and collaborative practices, a haphazard process at best. Mostly, it involved contacting individuals one trusts and asking them for referrals, while hoping that one's judgment about those individuals is justified and that their answers are thoughtful.

In the last fifteen years, a class of knowledge management software termed "expertise locating systems" has emerged to facilitate and improve the quality of expertise finding. These systems range from social networking platforms to knowledge bases. Some, like those in the social networking realm, rely on users to connect with each other, thus using social filtering to act as "recommender systems".

At the other end of the spectrum are specialized knowledge bases that rely on experts to populate a specialized type of database with their self-determined areas of expertise and contributions, and do not rely on user recommendations. Hybrids that feature expert-populated content in conjunction with user recommendations also exist, and are arguably more valuable for doing so.

Still other expertise knowledge bases rely strictly on external manifestations of expertise, herein termed "gated objects", e.g., citation impacts for scientific papers or data mining approaches wherein many of the work products of an expert are collated. Such systems are more likely to be free of user-introduced biases (e.g., ResearchScorecard), though the use of computational methods can introduce other biases.
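As a concrete example of such an externally "gated" measure, the widely used h-index can be computed from per-paper citation counts. This is a minimal sketch; production systems aggregate many more work products than a single citation metric:

```python
def h_index(citations):
    """Return the h-index: the largest h such that the researcher
    has at least h papers with at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Example: five papers with these citation counts
print(h_index([10, 8, 5, 4, 3]))  # 4: four papers each have >= 4 citations
```

Because the citation counts come from third-party behavior rather than self-description, such a score is harder for an individual to inflate, though it inherits the biases of the underlying citation data.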

There are also hybrid approaches which use user-generated data (e.g., member profiles), community-based signals (e.g., recommendations and skill endorsements), and personalized signals (e.g., social connection between searcher and results).
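A hybrid relevance score of this kind can be sketched as a weighted combination of the three signal families. The weights and signal names below are purely illustrative assumptions, not taken from any published system:

```python
def hybrid_score(profile_match, endorsements, connection_strength,
                 weights=(0.5, 0.3, 0.2)):
    """Combine a user-generated signal (profile match), a community
    signal (endorsements), and a personalized signal (connection
    strength), each normalized to [0, 1], into one relevance score.
    The weights are illustrative, not from any real system."""
    signals = (profile_match, endorsements, connection_strength)
    return sum(w * s for w, s in zip(weights, signals))

# A strong profile match with no endorsements and no connection
print(hybrid_score(1.0, 0.0, 0.0))  # 0.5
```

In practice such weights would be learned from click or hire outcomes rather than fixed by hand.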

Examples of the systems outlined above are listed in Table 1.

Table 1: A classification of expertise location systems (type; application domain; data source; examples, where given)

Social networking; professional networking; user-generated and community-generated data

Scientific literature; identifying publications with the strongest research impact; third party-generated data

Scientific literature; expertise search; software-generated data

Knowledge base; private expertise database; user-generated data:
  • MITRE Expert Finder (MITRE Corporation)
  • MIT ExpertFinder (ref. 3)
  • Decisiv Search Matters & Expertise (Recommind, Inc.)
  • ProFinda (ProFinda Ltd)
  • Skillhive (Intunex)
  • Tacit Software (Oracle Corporation)
  • GuruScan (GuruScan Social Expert Guide)

Knowledge base; publicly accessible expertise database; user-generated data

Knowledge base; private expertise database; third party-generated data:
  • MITRE Expert Finder (MITRE Corporation)
  • MIT ExpertFinder (ref. 3)
  • MindServer Expertise (Recommind, Inc.)
  • Tacit Software

Knowledge base; publicly accessible expertise database; third party-generated data:
  • ResearchScorecard (ResearchScorecard Inc.)
  • authoratory.com
  • BiomedExperts (Collexis Holdings Inc.)
  • KnowledgeMesh (Hershey Center for Applied Research)
  • Community Academic Profiles (Stanford School of Medicine)
  • ResearchCrossroads.org (Innolyst, Inc.)

Blog search engines; third party-generated data

Technical problems

A number of interesting problems follow from the use of expertise finding systems:

Expertise ranking

Means of classifying and ranking expertise (and therefore experts) become essential when the number of experts returned by a query is greater than a handful. Because the subjects being ranked are people rather than documents, such ranking also raises social concerns for these systems.
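A minimal ranking sketch, assuming a toy term-frequency model over documents attributed to each expert (real systems use far richer evidence than raw term counts):

```python
from collections import Counter

def rank_experts(query, expert_docs):
    """Rank experts by how often the query terms appear in the
    documents attributed to them. expert_docs maps an expert's
    name to a list of document strings."""
    terms = query.lower().split()
    scores = {}
    for expert, docs in expert_docs.items():
        words = Counter(" ".join(docs).lower().split())
        scores[expert] = sum(words[t] for t in terms)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical corpus of expert-attributed documents
docs = {
    "alice": ["deep learning for protein folding", "learning rates"],
    "bob": ["database indexing", "query optimisation in databases"],
}
print(rank_experts("deep learning", docs))  # ['alice', 'bob']
```

Even this toy model shows why ranking is unavoidable: any query over a large expert pool produces an ordered list, and the ordering criteria embed judgments about whose expertise counts.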

Sources of data for assessing expertise

Many types of data sources have been used to infer expertise. They can be broadly categorized based on whether they measure "raw" contributions provided by the expert, or whether some sort of filter is applied to these contributions.

Unfiltered data sources measure the raw contributions produced by an expert and have been used directly to assess expertise.

Filtered data sources, that is, contributions that require approval by third parties (grant committees, referees, patent offices, etc.), are particularly valuable for measuring expertise in a way that minimizes biases that follow from popularity or other social factors.
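One simple way to combine the two categories is to weight filtered (third-party-vetted) contributions more heavily than unfiltered ones. The weight below is an illustrative assumption, not an established constant:

```python
def expertise_score(contributions, filtered_weight=3.0):
    """Score a list of contributions, where each contribution is a
    dict with a 'filtered' flag indicating third-party approval
    (peer review, grant award, granted patent, etc.). Filtered
    items count filtered_weight times as much as raw ones."""
    score = 0.0
    for item in contributions:
        score += filtered_weight if item["filtered"] else 1.0
    return score

# Hypothetical contribution records
papers = [
    {"title": "peer-reviewed paper", "filtered": True},
    {"title": "blog post", "filtered": False},
]
print(expertise_score(papers))  # 4.0
```

The choice of weight encodes how much the system trusts third-party gatekeeping relative to raw output volume.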

Approaches for creating expertise content

Collaborator discovery

In academia, a related problem is collaborator discovery, where the goal is to suggest suitable collaborators to a researcher. While expertise finding is an asymmetric problem (an employer looking for an employee), collaborator discovery aims to establish more symmetric relationships (collaborations). Also, while in expertise finding the task can often be clearly characterized, this is not the case in academic research, where future goals are fuzzier.[4]
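As an illustration, a toy collaborator-discovery heuristic might rank candidates by the overlap of their research topics with the target researcher's, excluding existing co-authors. The Jaccard-similarity approach and the data below are hypothetical, not a published method:

```python
def suggest_collaborators(researcher, topics, coauthors):
    """Suggest collaborators for `researcher`: other researchers who
    share topics with them but are not already co-authors, ranked by
    Jaccard similarity of their topic sets."""
    mine = topics[researcher]
    candidates = []
    for other, theirs in topics.items():
        if other == researcher or other in coauthors.get(researcher, set()):
            continue
        union = mine | theirs
        sim = len(mine & theirs) / len(union) if union else 0.0
        if sim > 0:
            candidates.append((other, sim))
    return sorted(candidates, key=lambda pair: pair[1], reverse=True)

# Hypothetical topic sets and existing co-authorships
topics = {
    "ana": {"nlp", "ir"},
    "ben": {"ir", "databases"},
    "cai": {"nlp", "ir", "ml"},
}
coauthors = {"ana": {"ben"}}
print(suggest_collaborators("ana", topics, coauthors))
```

The symmetry of the relationship shows up in the scoring: the similarity between two researchers is the same in either direction, unlike an employer-to-candidate match.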

Further reading

  1. Ackerman, Mark and McDonald, David (1998) "Just Talk to Me: A Field Study of Expertise Location" Proceedings of the 1998 ACM Conference on Computer Supported Cooperative Work.
  2. Hughes, Gareth and Crowder, Richard (2003) "Experiences in designing highly adaptable expertise finder systems" Proceedings of the DETC Conference 2003.
  3. Maybury, M., D'Amore, R., House, D. (2002). "Awareness of organizational expertise." International Journal of Human-Computer Interaction 14(2): 199-217.
  4. Maybury, M., D'Amore, R., House, D. (2000). Automating Expert Finding. International Journal of Technology Research Management. 43(6): 12-15.
  5. Maybury, M., D'Amore, R., and House, D. (December 2001). Expert Finding for Collaborative Virtual Environments. Communications of the ACM 14(12): 55-56. In Ragusa, J. and Bochenek, G. (eds). Special Section on Collaboration Virtual Design Environments.
  6. Maybury, M., D'Amore, R. and House, D. (2002). Automated Discovery and Mapping of Expertise. In Ackerman, M., Cohen, A., Pipek, V. and Wulf, V. (eds.). Beyond Knowledge Management: Sharing Expertise. Cambridge: MIT Press.
  7. Mattox, D., M. Maybury, et al. (1999). "Enterprise expert and knowledge discovery". Proceedings of the 8th International Conference on Human-Computer Interactions (HCI International 99), Munich, Germany.
  8. Tang, J., Zhang J., Yao L., Li J., Zhang L. and Su Z.(2008) "ArnetMiner: extraction and mining of academic social networks" Proceedings of the 14th ACM SIGKDD international conference on Knowledge discovery and data mining.
  9. Vivacqua, A. (1999). "Agents for expertise location". Proceedings of the 1999 AAAI Spring Symposium on Intelligent Agents in Cyberspace, Stanford, CA.

Notes and References

  1. Balog, Krisztian (2012). "Expertise Retrieval". Foundations and Trends in Information Retrieval 6(2-3): 127-256. doi:10.1561/1500000024.
  2. Njemanze, Ikenna (2016). "What Does Being a Strategic HR Business Partner Look Like in Practice?" Retrieved August 21, 2022. Archived June 21, 2018: https://web.archive.org/web/20180621194045/https://digitalcommons.ilr.cornell.edu/cgi/viewcontent.cgi?referer=https://www.google.com.ph/&httpsredir=1&article=1109&context=student
  3. Ha-Thuc, Viet; Venkataraman, Ganesh; Rodriguez, Mario; Sinha, Shakti; Sundaram, Senthil; Guo, Lin (2015). "Personalized Expertise Search at LinkedIn". 2015 IEEE International Conference on Big Data (Big Data): 1238-1247. doi:10.1109/BigData.2015.7363878. arXiv:1602.04572. ISBN 978-1-4799-9926-2.
  4. Schleyer, Titus; Butler, Brian S.; Song, Mei; Spallek, Heiko (2012). "Conceptualizing and advancing research networking systems". ACM Transactions on Computer-Human Interaction 19(1): 1-26. doi:10.1145/2147783.2147785. PMID 24376309. PMC 3872832.