Over the last few months, the GDL team has been experimenting with AI, or more specifically, integrating the GDL platform with different AI and cognitive services. Developing AI components ourselves is not the project's focus; instead, we are integrating the GDL and the open educational resources in our platform with AI services that offer APIs.
We are using AI only when it helps us reach our project goals, and in this initial phase, we think AI can help us reach new users on emerging platforms and enhance the quality of content and metadata.
In our prototyping we have started exploring:
- Google Assistant
- Microsoft Cognitive Services, for spell checking and, at a later stage, language processing.
- Image-processing algorithms (still in planning) to automatically identify, caption, index and add alt-text to pictures. For this, we might use Microsoft Cognitive Services or Cloudinary.
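As a rough illustration of the alt-text idea, the sketch below picks the best machine-generated caption from an image-description result. The response shape loosely follows what services like Azure Computer Vision return (a list of captions with confidence scores), but the field names, the sample data and the confidence threshold here are assumptions for illustration, not our actual integration.

```python
def best_alt_text(describe_response: dict, min_confidence: float = 0.5):
    """Return the highest-confidence caption text, or None if no
    caption clears the confidence bar (better no alt-text than a bad one)."""
    captions = describe_response.get("description", {}).get("captions", [])
    usable = [c for c in captions if c.get("confidence", 0.0) >= min_confidence]
    if not usable:
        return None
    return max(usable, key=lambda c: c["confidence"])["text"]

# Illustrative response in the general shape of an image-description API result
sample = {
    "description": {
        "captions": [
            {"text": "a child reading a book", "confidence": 0.87},
            {"text": "a person sitting", "confidence": 0.42},
        ]
    }
}
print(best_alt_text(sample))  # -> a child reading a book
```

The confidence cutoff matters for accessibility: a wrong alt-text is worse for a screen-reader user than a missing one, so low-confidence captions would be sent for human review instead.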
Can I speak to the GDL?
The first real output from our work will be an integration with Google Assistant to facilitate a simple conversation between the end-user and the GDL platform. Users will be able to ask the Google Assistant to read books, search for books and list books from different reading levels, using only their voice in a "conversation" with the GDL.
The GDL app on Google Assistant will be launched with support for English in the first release later this month.
Platform agnostic and vendor independent
The Global Digital Library is focusing on developing a platform that provides access to free, high-quality, early grade reading resources in languages that children use and understand. Our technical development is focused on creating a user-friendly system that requires little or no technical skills for users to access books and games on the platform.
For us, it is crucial that the GDL is platform and device agnostic: we create a service that will be accessible to any user on the most common platforms and devices, such as smartphones, computers, tablets or wearables. Wearables are not in any way our focus, but rather an example that the content we create today must work on devices we have not yet seen in the market. We also work to ensure that the content and core reading experience from the GDL platform remain fully vendor-independent.
In the GDL project, we do this by:
- Developing the core part of our platform for the web (HTML), rather than as platform-dependent apps for iOS or Android
- Making all content accessible through APIs and an OPDS feed, allowing others to develop their own services with GDL content
- Integrating with platforms like Google Assistant and Microsoft Cognitive Services without being locked into these platforms.
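To give a feel for what the OPDS route enables, here is a minimal sketch of how a third party could parse a GDL-style OPDS feed. OPDS catalogs are Atom XML, so the standard library suffices; the sample feed, book title and link below are invented for illustration and do not come from the actual GDL feed.

```python
import xml.etree.ElementTree as ET

ATOM_NS = "{http://www.w3.org/2005/Atom}"

def parse_opds_entries(feed_xml: str):
    """Parse an OPDS (Atom) feed and return (title, epub_href) pairs.
    epub_href is None when an entry has no EPUB acquisition link."""
    root = ET.fromstring(feed_xml)
    books = []
    for entry in root.iter(f"{ATOM_NS}entry"):
        title = entry.findtext(f"{ATOM_NS}title", default="")
        epub_href = None
        for link in entry.iter(f"{ATOM_NS}link"):
            # EPUB downloads are identified by their MIME type
            if link.get("type") == "application/epub+zip":
                epub_href = link.get("href")
        books.append((title, epub_href))
    return books

# Hypothetical feed fragment, for illustration only
SAMPLE_FEED = """<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Sample book feed</title>
  <entry>
    <title>A Sample Storybook</title>
    <link type="application/epub+zip" href="https://example.org/book.epub"/>
  </entry>
</feed>"""

print(parse_opds_entries(SAMPLE_FEED))
```

Because the feed is a plain, standardized format rather than a proprietary API, a reading app, a library catalog or a voice assistant backend can all consume the same content without any GDL-specific client code.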