You are trying to calculate the join of the three triple patterns. Papers on join implementation over Apache Hadoop will be useful background.
It may be helpful to look at Apache Spark and the Resilient Distributed Dataset (RDD) concept.
It is also important to consider the likely selectivity of each pattern. As Joshua says, the "pages" pattern may well yield a unique solution, and using that binding to simply look up each of "name" and "volume" is not a demanding task.
ARQ's in-memory algorithm is not aiming for maximum independent parallelism, which is what you want on Hadoop. Merge joins (or sort-merge joins) make two parallelizable accesses to the data, one scan per sorted input.
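To make the merge-join point concrete, here is a minimal self-contained sketch (not ARQ code; the class and method names are my own) that joins two key-sorted inputs with a single scan of each. In a Hadoop setting the two sorted inputs could be produced independently and in parallel, which is the property mentioned above.

```java
import java.util.ArrayList;
import java.util.List;

/** Minimal sort-merge join sketch over two key-sorted lists of distinct keys. */
public class MergeJoinSketch {

    /** Returns the keys present in both inputs; each list is scanned once. */
    public static List<String> mergeJoin(List<String> left, List<String> right) {
        List<String> out = new ArrayList<>();
        int i = 0, j = 0;
        while (i < left.size() && j < right.size()) {
            int cmp = left.get(i).compareTo(right.get(j));
            if (cmp == 0) {          // keys match: emit and advance both sides
                out.add(left.get(i));
                i++;
                j++;
            } else if (cmp < 0) {    // left key is smaller: advance left
                i++;
            } else {                 // right key is smaller: advance right
                j++;
            }
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(mergeJoin(
                List.of("name1", "pages1", "volume1"),
                List.of("name1", "volume1")));
        // prints [name1, volume1]
    }
}
```

A real triple-pattern join would carry full bindings rather than bare keys, and would need to handle duplicate keys, but the access pattern is the same: each side is read sequentially exactly once.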
You can extend ARQ at the basic pattern level, at the whole algebra execution level, or at any point in between, by extending the appropriate class.
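As a sketch of the basic-pattern extension point: ARQ lets you register a `StageGenerator` that is called for each basic graph pattern, so custom (e.g. Hadoop-backed) matching can be plugged in there. The Jena class names below (`StageGenerator`, `StageBuilder`, `ARQ`) are real ARQ API; `MyStageGenerator` is a placeholder, and the body simply delegates to the previously registered generator.

```java
import org.apache.jena.query.ARQ;
import org.apache.jena.sparql.core.BasicPattern;
import org.apache.jena.sparql.engine.ExecutionContext;
import org.apache.jena.sparql.engine.QueryIterator;
import org.apache.jena.sparql.engine.main.StageBuilder;
import org.apache.jena.sparql.engine.main.StageGenerator;

public class CustomBgpExecution {

    /** Placeholder generator: intercept each basic graph pattern. */
    static class MyStageGenerator implements StageGenerator {
        private final StageGenerator fallback;

        MyStageGenerator(StageGenerator fallback) {
            this.fallback = fallback;
        }

        @Override
        public QueryIterator execute(BasicPattern pattern, QueryIterator input,
                                     ExecutionContext execCxt) {
            // A custom join strategy (e.g. a parallel merge join) would go here;
            // this sketch just delegates to the generator registered before us.
            return fallback.execute(pattern, input, execCxt);
        }
    }

    public static void main(String[] args) {
        // Wrap whatever generator is currently registered, then install ours.
        StageGenerator current =
                (StageGenerator) ARQ.getContext().get(ARQ.stageGenerator);
        StageBuilder.setGenerator(ARQ.getContext(), new MyStageGenerator(current));
    }
}
```

For deeper control over the whole algebra evaluation, ARQ also supports subclassing `OpExecutor` and registering a corresponding factory, but the `StageGenerator` route above is the smaller first step.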