NTCIR Actionable Knowledge Graph (AKG) Task

Task Description

In this pilot task, we set up two subtasks to develop and advance technologies for actionable knowledge graph presentations that can be used in search engines.


You can participate in either subtask or in both!

Action Mining Subtask (AM):

Input: Entity, Entity Type, and Wikipedia URL
Output: A ranked list of actions

For a given entity type (e.g., Place) and entity instance (e.g., "poland"), participants will be asked to find potential actions that can be taken (e.g., "visit poland", "buy a house in poland", "find weather in poland"). Participants are allowed to use any external resources (e.g., the Action section in schema.org) to return a ranked list of potential actions. Up to 100 actions should be submitted for each query (a pair of an entity and its type). Each submitted action should consist of a verb and an object (also called the modifier of the verb), where the object's length is limited to 50 characters. For example, in the above-mentioned case of "poland" as an entity instance of the Place type, "buy a house in poland" would be an action composed of the verb "buy" and the object "a house in poland". Participants are allowed to submit up to three actions that share the same verb. Note that actions can sometimes lack objects (i.e., actions containing only a verb), in which case the object is considered NULL. For example, for the entity "outlook express" of the type Product, "download" is an example of a correct action that does not require any explicit object.

This subtask can be seen as an open information extraction task, and it allows us to accumulate a comprehensive set of actions related to a given entity type and entity instance. The returned actions will be assessed by crowdsourcing and scored from 1 to 5. Actions will be evaluated not only on their relevance but also on their diversity, in order to discourage submissions of many similar actions (e.g., actions whose verbs are synonyms or whose objects have very similar meanings).
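
The exact submission format will be specified by the organizers; purely as an illustration, the sketch below checks the constraints stated above (at most 100 actions per query, at most three actions sharing a verb, objects limited to 50 characters, NULL objects allowed) for a hypothetical list of (verb, object) pairs.

```python
# Illustrative constraint check for one query's AM actions (hypothetical format:
# a ranked list of (verb, object) pairs; object may be None, i.e. NULL).
from collections import Counter

MAX_ACTIONS_PER_QUERY = 100   # up to 100 actions per (entity, type) query
MAX_ACTIONS_PER_VERB = 3      # at most three actions sharing the same verb
MAX_OBJECT_LENGTH = 50        # object length is limited to 50 characters

def validate_actions(actions):
    """Return a list of constraint violations for one query's ranked actions."""
    problems = []
    if len(actions) > MAX_ACTIONS_PER_QUERY:
        problems.append(f"too many actions: {len(actions)} > {MAX_ACTIONS_PER_QUERY}")
    verb_counts = Counter(verb for verb, _ in actions)
    for verb, count in verb_counts.items():
        if count > MAX_ACTIONS_PER_VERB:
            problems.append(f"verb '{verb}' used {count} times (limit {MAX_ACTIONS_PER_VERB})")
    for verb, obj in actions:
        if obj is not None and len(obj) > MAX_OBJECT_LENGTH:
            problems.append(f"object too long for verb '{verb}': {len(obj)} chars")
    return problems

# Example: two actions for the entity "poland" (Place) and one NULL-object action.
print(validate_actions([("visit", "poland"),
                        ("buy", "a house in poland"),
                        ("download", None)]))   # -> []
```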

Sample data

Entity: Final Fantasy VIII (Type: Product, Wikipedia: https://en.wikipedia.org/wiki/Final_Fantasy_VIII)
Example actions (verb + object):
  • play on android
  • watch videos of other players
  • buy new weapons
  • compare with other games
  • learn junction system

Entity: Zambia (Type: Place, Wikipedia: https://en.wikipedia.org/wiki/Zambia)
Example actions (verb + object):
  • mine copper
  • prospect for minerals
  • produce row crops
  • visit Kafue National Park
  • watch national football games

Entity: Yo-Yo Ma (Type: Person, Wikipedia: https://en.wikipedia.org/wiki/Yo-Yo_Ma)
Example actions (verb + object):
  • transliterate his name in Chinese
  • list cellos he has ever owned
  • watch his performance at Apple keynotes
  • listen to The Goat Rodeo Sessions
  • buy a ticket for his concert

Entity: York University (Type: Organisation, Wikipedia: https://en.wikipedia.org/wiki/York_University)
Example actions (verb + object):
  • visit York University Observatory
  • apply for an undergraduate program
  • defer an offer
  • stay in a residence
  • read Excalibur (university newspaper)

Entity: Wireless Festival (Type: Event, Wikipedia: https://en.wikipedia.org/wiki/Wireless_Festival)
Example actions (verb + object):
  • sponsor the event
  • buy a ticket for one day
  • camp at the site
  • contact for press enquiries
  • reserve a big green coach

  • Number of runs one group can submit: 3
  • Number of actions one run can contain per entity query: 100
  • Depth of pool for relevance assessments per entity query: 20

Actionable Knowledge Graph Generation Subtask (AKGG):

Input: Query, Entity Type, Entity, and Action
Output: A ranked list of attributes of the type

For a given search query, an entity included in that query, the type of the entity, and an action (e.g., "kyoto budget travel", "kyoto", location, "visit a temple"), participants will be asked to rank entity properties based on their relevance to the query. The query can be ambiguous, as realistic search queries often are, and participants need to return a ranked list of relevant entity properties to create an actionable knowledge graph. Actions in the test queries will be taken from the outcomes of the Action Mining (AM) subtask. The properties to be returned are those defined as attributes of the entity type in the schema.org vocabulary.
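
As a concrete illustration only (not an official baseline), the sketch below ranks a hypothetical list of candidate schema.org properties by a naive string-similarity score against the query and the action; both the candidate list and the scoring heuristic are assumptions, and a real system would draw candidates from the schema.org vocabulary and use a proper relevance model.

```python
# Naive illustration of AKGG property ranking: score each candidate property of
# the entity type by string similarity to the query and action text.
from difflib import SequenceMatcher
import re

def split_camel(prop):
    """Turn a schema.org-style property name such as 'startTime' into 'start time'."""
    return re.sub(r"(?<=[a-z])(?=[A-Z])", " ", prop).lower()

def rank_properties(query, action, candidate_properties, top_k=20):
    """Return up to top_k candidate properties ordered by similarity to query + action."""
    context = f"{query} {action}".lower()
    def score(prop):
        return SequenceMatcher(None, split_camel(prop), context).ratio()
    return sorted(candidate_properties, key=score, reverse=True)[:top_k]

# Hypothetical candidates for the "kyoto budget travel" / "visit a temple" example.
candidates = ["address", "geo", "openingHoursSpecification", "event", "photo", "aggregateRating"]
print(rank_properties("kyoto budget travel", "visit a temple", candidates))
```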

The effectiveness of the returned properties will be judged by pairwise comparisons, performed via crowdsourcing over all pairs of submitted runs. We are currently considering proposing a new metric that takes such preference-based judgements into account by applying the Bradley-Terry model.
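
For reference, the Bradley-Terry model assigns each compared item i a positive strength p_i with P(i preferred over j) = p_i / (p_i + p_j). The sketch below fits these strengths from a matrix of pairwise win counts using the standard minorization-maximization iteration; it is only an illustration of the model, since the task's actual metric is still under consideration.

```python
# Minimal Bradley-Terry fit from pairwise preference counts (illustration only).
# wins[i][j] = number of times item i was preferred over item j.
def bradley_terry(wins, iterations=100):
    n = len(wins)
    p = [1.0] * n                        # initial strengths
    for _ in range(iterations):
        new_p = []
        for i in range(n):
            w_i = sum(wins[i])           # total wins of item i
            denom = sum((wins[i][j] + wins[j][i]) / (p[i] + p[j])
                        for j in range(n) if j != i)
            new_p.append(w_i / denom if denom > 0 else p[i])
        total = sum(new_p)
        p = [x / total for x in new_p]   # normalize so strengths sum to 1
    return p

# Example: item 0 beats item 1 in 7 of 10 comparisons and item 2 in 6 of 10;
# item 1 beats item 2 in 5 of 10.
wins = [[0, 7, 6],
        [3, 0, 5],
        [4, 5, 0]]
print(bradley_terry(wins))   # higher value = more preferred overall
```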


Participants may use any external sources for both subtasks. A potential document collection and knowledge base are available here and here.

Sample data

Query: request funding
Entity: funding (Types: Thing, Action)
Action: request funding
Ranked properties: agent, result, object, location, participant, startTime, error, instrument, actionStatus, endTime, target

Query: consequences of flood
Entity: flood (Types: Thing, Event)
Action: live in a flood area
Ranked properties: actor, location, duration, startDate, endDate, subEvent, aggregateRating, attendee, composer, contributor, director

Query: How to use google maps
Entity: Google maps (Types: Thing, Intangible, Service)
Action: create a google maps mashup
Ranked properties: availableChannel, serviceOutput, brand, provider, serviceType, logo, isSimilarTo, isRelatedTo, audience, aggregateRating, review, areaServed, award, category, hasOfferCatalog, hoursAvailable, offers, providerMobility

Query: caring about infant
Entity: infant (Types: Thing, Person)
Action: take care of new born baby
Ranked properties: parent, gender, birthDate, weight, height, birthPlace, sibling, givenName

Query: goat meat for bbq
Entity: goat meat (Types: Thing, Product)
Action: cook on the bbq or grill
Ranked properties: productionDate, weight, manufacturer, aggregateRating, brand, width, ...

  • Number of runs one group can submit: 3
  • Number of attributes one run can contain per input: 20
  • Depth of pool for relevance assessments per input: 20