Conversations – Marco Amicucci interviews Carlo Tasso of Udine University

M. A.: I would very much like to talk to you about artificial intelligence, a topic that is often not addressed by those involved in e-learning, although in recent years there have been interesting developments. What direction is research on this topic taking?

C. T.: […] Artificial intelligence is a subject that was established in the early years of the history of computing. There were companies such as IBM, which had built a solid foundation for their business around organisation, but whose applications were based on the storage of large amounts of data and on more or less sophisticated processing. The computer was not yet interactive: it worked on the data it processed in order to produce results. In the second half of the fifties, some people began to think that, in addition to this number-crunching approach, computing could be used for simulation – the emulation of functions traditionally regarded as proprietary to the human mind: cognitive abilities such as natural language understanding and the analysis of texts. During the Cold War, the American Department of Defense funded research in the field of machine translation. The success achieved was not in line with the amount of effort […] It was not a question of putting together parts translated word for word: that is not the way to translate a text, nor the right way to correctly render a foreign language. There was a realisation that it was important to analyse the text not so much in superficial terms, but at the level of morphology and syntax, as well as semantics. What was happening in those years in e-learning concerned issues such as language learning rather than the study of arithmetic.
A sharp distinction was perceived between the traditional approach, which proposed pre-defined questions with pre-defined answers, and the need to try to build a smarter model. […]

M. A.: Help us use our imagination a little. Help us imagine a possible future application of these technologies in education, because it is obviously something we are working on. Where will it take us? What will it most likely be used for?

C. T.: I can tell you in particular what I know from my field of expertise. Since the creation of the web, these techniques for the semantic analysis of texts have been adopted to analyse what the concepts are and to develop customised systems for searching for information. The web was established in the early nineties and developed even more strikingly in the early two thousands; by the second half of the nineties it was already clear that the mass of information, its quality and its contents were interesting. There were automatic interfaces for finding information on subjects individual users were interested in, attending, through these automated tools, to their actual information needs; otherwise it would have been humanly impossible to check all the information generated by or published on the network. […] We all know that when non-IT professionals look for information on the web, the typical accuracy of the results, after entering two or three keywords in the search engine, is fifteen to twenty per cent. The words we use in speech and writing, as sequences of letters or character strings, often have more than one meaning: they are polysemous, and depending on the context or the dialogue in which they occur, they can mean different things. The example I always use with my students is the word “rate”, which in itself means many things, from a poetic to an economic concept – things that are totally different.
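The idea of resolving a polysemous word from its context can be sketched in a few lines. The following is a minimal, self-contained illustration in the spirit of gloss-overlap disambiguation (the Lesk algorithm); the glosses and the tokenisation are invented for illustration, not taken from any real lexicon.

```python
# Toy word-sense disambiguation: pick the sense whose (invented) gloss
# shares the most words with the surrounding context.

SENSES = {
    "rate": {
        "economic": "amount of money charged or paid as interest per unit",
        "speed": "measure of how fast something happens over time",
    }
}

def disambiguate(word: str, context: str) -> str:
    """Return the sense of `word` whose gloss overlaps most with `context`."""
    context_words = set(context.lower().split())
    best_sense, best_overlap = None, -1
    for sense, gloss in SENSES[word].items():
        overlap = len(context_words & set(gloss.split()))
        if overlap > best_overlap:
            best_sense, best_overlap = sense, overlap
    return best_sense

print(disambiguate("rate", "the bank raised the interest charged on the loan"))
# With these toy glosses, the economic sense wins on word overlap.
```

A real system would use a dictionary of glosses and far richer context features, but the principle is the one described above: without context, a bare keyword cannot be assigned a meaning.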
If they are not contextualised, it is not possible to be precise in providing the desired results. So what needs to be done? It is necessary to increase the amount of information the user provides and to analyse the meaning of the documents. It is not sufficient to compare keywords; it is necessary to identify meanings and reason about them. We have worked, and are still working, on these issues, with a focus on searching for information. In an educational process, when a student is learning a subject and is faced with a learning object, or a page of a document, there is little information about the subject. The entire web could be a back-up of additional learning resources, limited only by the student’s ability to search for them. […]

M. A.: Will the content provided to learners be customised by software that is able to understand what they are studying at the moment?

C. T.: The idea is that if you and I were to study the same subject while focusing on different aspects, our concept extractor would produce slightly different solutions, customising the result of the interaction because it started from two different perspectives. If you are studying the mountains of Friuli Venezia Giulia from an economic angle while I am studying the socio-demographic aspects, you may be interested in food and wine or forestry, while I will highlight other facets of the same concept, so different results and items of interest will be the outcome.

M. A.: When we talk about artificial intelligence, do you imagine a subject that focuses only on semantic analysis, or are there many other subjects involved?
C. T.: Artificial intelligence, as I defined it at the beginning of my talk, is a computer science subject that tries to analyse, build and replicate, even from an experimental perspective, the capabilities deemed “intelligent”: our cognitive skills (analysis of language, reasoning), the processing of signals that come from our senses (sight, hearing, touch), and the ability to construct an understanding of an environment (movement, manipulation, image interpretation and artificial vision). […] In recent years this has led to wider application, and therefore wider interest: opportunities have been perceived, and consequently investments and results. We are entering a world where traditional techniques are used in a big data context, producing results that greatly affect all of us. Who knows how many things we are currently experiencing are the result of a big data analysis? Who knows, for example, whether the contract my daughter wishes to sign with a phone company is the result of a big data analysis? […]

M. A.: One thing that would be interesting to understand is this: given that we are not talking about a subject that can provide a single piece of software applicable to any technical scenario, do you envision a future in which there will be pre-packaged artificial intelligence solutions?

C. T.: There are now various frameworks (on open source platforms) that make it possible to classify texts automatically and to identify their subject categories; they are used to develop systems based on these basic modules. As for vision, coupling sensors to algorithms and neural networks on the chip of a camera, so that it recognises what the machine requires, is the result of an instruction: for example, given a manner x in which a person passes through a certain point y, the camera signals the next time x occurs.
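The automatic text classification such frameworks provide can be sketched without any framework at all. Below is a minimal, self-contained toy Naive Bayes classifier that assigns a subject category to a text from word frequencies; the training examples and category names are invented for illustration.

```python
import math
from collections import Counter, defaultdict

def train(examples):
    """examples: list of (text, category). Returns word counts per category."""
    counts = defaultdict(Counter)
    for text, category in examples:
        counts[category].update(text.lower().split())
    return counts

def classify(text, counts):
    """Pick the category maximising a smoothed log-likelihood of the words."""
    vocab = {w for c in counts.values() for w in c}
    best, best_score = None, float("-inf")
    for category, words in counts.items():
        total = sum(words.values())
        score = sum(
            math.log((words[w] + 1) / (total + len(vocab)))
            for w in text.lower().split()
        )
        if score > best_score:
            best, best_score = category, score
    return best

model = train([
    ("interest rate loan bank payment", "economics"),
    ("verse metre poem stanza rhyme", "poetry"),
])
print(classify("the bank changed the loan rate", model))  # → economics
```

Production frameworks add tokenisation, feature selection and far larger training sets, but the basic module, a text in, a subject category out, works on this principle.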
Or one can imagine a system that classifies the results of sensory analysis, with a beam of light trained to recognise the different substances in a food and measure their percentages: the result of a training carried out with machine learning tools. There are now a great number of start-ups, specific initiatives, and recommendation systems related to tourism. There are modules already embedded in the applications we use. I would say this is not a case of buying a package or a closed, turnkey system. […]
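The tourism recommendation systems mentioned here, and the personalised results discussed earlier for two learners studying the same region from different angles, share one mechanism: items are scored against a profile of weighted interests rather than against raw keywords. A minimal sketch, with destinations, tags and profile weights invented for illustration:

```python
# Toy content-based recommender: rank items by the summed weight of the
# profile interests their tags match.

def recommend(items: dict, profile: dict) -> list:
    """Return item names sorted by how well their tags match the profile."""
    def score(tags):
        return sum(profile.get(t, 0.0) for t in tags)
    return sorted(items, key=lambda name: score(items[name]), reverse=True)

destinations = {
    "mountain_trail": {"hiking", "nature", "forestry"},
    "wine_tour": {"food", "wine", "culture"},
    "city_museum": {"culture", "history"},
}

# A food-and-wine oriented profile ranks the wine tour first; a different
# profile over the same destinations would produce a different ordering.
profile = {"food": 1.0, "wine": 0.9, "culture": 0.3}
print(recommend(destinations, profile))
```

Real systems learn the profile weights from behaviour and content analysis instead of declaring them by hand, which is where the machine learning training described above comes in.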

written by: Staff skilla, 31 March 2016


© Copyright 2024 Amicucci Formazione