CCKS 2018 | Frontier Technology Tutorials



Time: August 14-15
Venue: Main Lecture Hall, TEDA College, Nankai University

Schedule

Time | Topic | Invited Speaker
August 14, morning (8:30 – 10:00) | Deep Knowledge Graph Reasoning | William Wang
August 14, morning (10:30 – 12:00) | Exploiting and Reasoning With Open Knowledge Graph | Jeff Pan
August 14, afternoon (13:30 – 17:00) | Deep Learning for Natural Language Inference | Xiaodan Zhu
August 15, morning (8:30 – 12:00) | Semantic Relation Extraction from Text | Preslav Nakov
August 15, afternoon (13:30 – 15:00) | Construction and Applications of Domain-Specific Knowledge Graphs | Wei Zhang (张伟)
August 15, afternoon (15:30 – 17:00) | Semantic Computing and Knowledge-Based Question Answering in Real-World Applications | Quan Liu (刘权)

 

T1

Title: Deep Knowledge Graph Reasoning

Time: August 14, 8:30 – 10:00

Abstract: Learning to reason and understand the world’s knowledge is a fundamental problem in Artificial Intelligence (AI). The core research question that I will address in this tutorial is the following: how can we design scalable statistical learning and inference methods to operate over rich knowledge representations? In this tutorial, I will describe some recent studies on learning to reason in large-scale knowledge graphs (KGs). More specifically, I will introduce both path-based and embedding-based approaches. Then, I will introduce DeepPath, a novel deep reinforcement learning framework for learning multi-hop relational paths: it uses a policy-based agent with continuous states based on knowledge graph embeddings, which reasons in a KG vector space by sampling the most promising relation to extend its path. To conclude, I will also describe some of our initial attempts at bridging path-finding and path-reasoning in a principled variational inference framework.
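
To make the path-finding idea concrete, below is a minimal sketch of a policy network that scores candidate relations from a continuous state built out of KG embeddings. It is an illustration in the spirit of the tutorial, not the DeepPath implementation; the embedding dimensions, relation count, and random stand-in embeddings are all assumptions.

```python
# Hypothetical sketch of a policy-based agent for KG path finding.
# Random tensors stand in for pretrained KG embeddings (e.g., TransE);
# all sizes are illustrative, not taken from the tutorial.
import torch
import torch.nn as nn

class RelationPolicy(nn.Module):
    def __init__(self, embed_dim: int, num_relations: int, hidden: int = 128):
        super().__init__()
        # State = [embedding of current entity ; (target - current) offset]
        self.net = nn.Sequential(
            nn.Linear(2 * embed_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_relations),
        )

    def forward(self, current_emb, target_emb):
        state = torch.cat([current_emb, target_emb - current_emb], dim=-1)
        return torch.softmax(self.net(state), dim=-1)  # distribution over relations

# Sample the most promising relation to extend the current path.
embed_dim, num_relations = 100, 200            # illustrative sizes
policy = RelationPolicy(embed_dim, num_relations)
current = torch.randn(embed_dim)               # stand-in entity embedding
target = torch.randn(embed_dim)
probs = policy(current, target)
next_relation = torch.multinomial(probs, 1).item()
print("sampled relation id:", next_relation)
```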

Bio: William Wang is the Director of the Natural Language Processing Group (http://nlp.cs.ucsb.edu/) and an Assistant Professor in Computer Science at the University of California, Santa Barbara. He received his PhD from the School of Computer Science, Carnegie Mellon University. He has broad interests in machine learning approaches to data science, including statistical relational learning, information extraction, computational social science, speech, and vision. He has published more than 50 papers at leading conferences and journals, and received best paper awards (or nominations) at ASRU 2013, CIKM 2013, and EMNLP 2015, a best reviewer award at NAACL 2015, an IBM Faculty Award, an Adobe Research Award, and the Richard King Mellon Presidential Fellowship in 2011. He is an alumnus of Columbia University, and a former research scientist intern at Yahoo! Labs, Microsoft Research Redmond, and the University of Southern California. In addition to research, William enjoys writing scientific articles that reach the broader online community: his microblog @王威廉 has more than 100,000 followers and millions of monthly views. His work and opinions appear in major tech media outlets such as Wired, VICE, Fast Company, and Mental Floss.

Title: Exploiting and Reasoning With Open Knowledge Graph

Time: August 14, 10:30 – 12:00

Abstract: While domain-specific knowledge graphs are useful within specific domains, open knowledge graphs such as DBpedia, YAGO and Wikidata have recently played instrumental roles in a number of applications. They can be used as common-sense knowledge for machine learning applications. They can also serve as reusable knowledge to complement domain-specific knowledge graphs. In this tutorial, we will (1) introduce some well-known open knowledge graphs, including DBpedia, YAGO and Wikidata, and their applications, and (2) survey existing reasoning techniques for large-scale open knowledge graphs. This tutorial is designed for general semantic technology practitioners, whether from research or industry. Participants will only be expected to have basic knowledge of semantic technologies.
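
As a small, hedged illustration of exploiting an open knowledge graph, the sketch below sends a SPARQL query to DBpedia's public endpoint using the SPARQLWrapper package; the chosen resource is arbitrary, and the live endpoint's availability and rate limits are outside the tutorial's scope.

```python
# Sketch: pull a few facts about a resource from DBpedia's public SPARQL endpoint.
# Requires the SPARQLWrapper package; the resource below is just an example.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
    SELECT ?p ?o WHERE {
        <http://dbpedia.org/resource/Tim_Berners-Lee> ?p ?o .
    } LIMIT 10
""")
sparql.setReturnFormat(JSON)

results = sparql.query().convert()
for b in results["results"]["bindings"]:
    print(b["p"]["value"], "->", b["o"]["value"])
```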

Bio: Prof. Dr. Jeff Z. Pan leads the Knowledge Technology group in the Department of Computing Science at the University of Aberdeen. His research focuses primarily on knowledge representation, artificial intelligence and data science, in particular knowledge graph construction and maintenance, large-scale ontology reasoning, stream reasoning, question answering, and combining ontology reasoning with machine learning, as well as their applications. He is a key contributor to the W3C OWL (Web Ontology Language) standard. He leads the development of the award-winning TrOWL reasoner, the only ontology reasoner that Oracle Spatial and Graph (from v12) uses via the OWL-DBC database connection. He is an internationally leading expert on knowledge graphs, serving as the Chief Editor of the first two books on Knowledge Graph, a technology now widely used by world-leading IT companies. As the Chief Scientist and Coordinator of the EU Marie Curie K-Drive project, he coordinated 22 Marie Curie Fellows on knowledge graph and ontology research. He is an Associate Editor of the Journal of Web Semantics (JWS) and of the International Journal on Semantic Web and Information Systems (IJSWIS). He actively teams up with industrial collaborators on innovative research.

T2

Title: Deep Learning for Natural Language Inference

Time: August 14, 13:30 – 17:00

Abstract: Reasoning and inference are central to both human and artificial intelligence (AI). Modeling inference in natural language is notoriously challenging but is a basic problem towards true natural language understanding; as MacCartney and Manning (2008) point out, “a necessary (if not sufficient) condition for true natural language understanding is a mastery of open-domain natural language inference.” In this tutorial, I will introduce the state-of-the-art deep learning models for natural language inference (NLI). The tutorial will start with the more fundamental problems of semantic representation and composition to lay the basis for our discussion. It will then focus not only on how deep learning models achieve state-of-the-art performance but also on their limitations.
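
As a concrete (and deliberately simplified) illustration of sentence-pair modeling for NLI, the sketch below encodes the premise and hypothesis with a shared BiLSTM and classifies the pair from standard matching features. It is not any particular published model; the vocabulary size, dimensions, and three-way label set are assumptions.

```python
# Minimal NLI sketch: shared BiLSTM encoder, max pooling, and the common
# [u; v; |u-v|; u*v] matching features. All sizes and labels are illustrative.
import torch
import torch.nn as nn

class SimpleNLI(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=100, hidden=128, num_labels=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.LSTM(embed_dim, hidden, batch_first=True, bidirectional=True)
        self.classifier = nn.Sequential(
            nn.Linear(8 * hidden, hidden),   # 4 features x (2 * hidden) each
            nn.ReLU(),
            nn.Linear(hidden, num_labels),   # entailment / contradiction / neutral
        )

    def encode(self, token_ids):
        out, _ = self.encoder(self.embed(token_ids))
        return out.max(dim=1).values         # max-pool over time steps

    def forward(self, premise_ids, hypothesis_ids):
        u, v = self.encode(premise_ids), self.encode(hypothesis_ids)
        feats = torch.cat([u, v, (u - v).abs(), u * v], dim=-1)
        return self.classifier(feats)

# Usage with dummy token ids (batch of 2, sequence length 7).
model = SimpleNLI()
logits = model(torch.randint(0, 10000, (2, 7)), torch.randint(0, 10000, (2, 7)))
print(logits.shape)  # torch.Size([2, 3])
```

The models discussed in the tutorial replace these pieces with attention-based alignment and richer representations, but the interface (two sentences in, an entailment label out) stays the same.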

Bio: Xiaodan Zhu is an Assistant Professor in the Department of Electrical and Computer Engineering (ECE), Queen’s University, Canada. His research interests include Deep Learning, Natural Language Processing, Machine Learning, and Artificial Intelligence. Dr. Zhu received his Ph.D. from the Department of Computer Science at the University of Toronto in 2010 and his Master’s degree from the Department of Computer Science and Technology at Tsinghua University in 2000. Dr. Zhu is an Associate Editor of the Computational Intelligence journal. He has also served on many academic committees, e.g., as the Publication Chair for COLING-2018, Area Chair of ACL-2018 and COLING-2018, and Steering Committee Member of SemEval-2018. Dr. Zhu is a panel member of the Canada NSERC Discovery Grants (Computer Science; 2017, 2020, 2021). He has also served as an external reviewer for many government grants in Canada (e.g., NSERC), Singapore, and Hong Kong (e.g., GRF). Dr. Zhu also helps assess start-up companies’ proposals for seed-stage programs. In the past, he worked with top government research labs (e.g., NRC) and industrial research labs such as Google (New York), IBM T.J. Watson Research Center, and the Intel China Research Center.

T3

Title: Semantic Relation Extraction from Text

Time: August 15, 8:30 – 12:00

Abstract: Every non-trivial text describes interactions and relations between people, institutions, activities, events and so on. What we know about the world consists in large part of such relations, and that knowledge contributes to the understanding of what texts refer to. Newly found relations can in turn become part of this knowledge that is stored for future use. To grasp a text’s semantic content, an automatic system must be able to recognize relations in texts and to reason about them. This may be done by applying and updating previously acquired knowledge. We focus here in particular on semantic relations that describe the interactions between nouns and compact noun phrases, and we present such relations from both a theoretical and a practical perspective. The theoretical exploration sketches the historical path that has brought us to the contemporary view and interpretation of semantic relations. We discuss a wide range of relation inventories proposed by linguists and by language processing people. Such inventories vary by domain, granularity and suitability for downstream applications. On the practical side, we investigate the recognition and acquisition of relations from texts. In a look at supervised learning methods, we present some of the available datasets, the variety of features that can describe relation instances, and some learning algorithms found appropriate for the task, including recent feature-less deep learning approaches. Next, we present weakly supervised and unsupervised learning methods of acquiring relations from large corpora with little or no previously annotated data. We show how enduring the bootstrapping algorithm based on seed examples or patterns has proved to be, and how it has been adapted to tackle Web-scale text collections. We also show a few machine learning techniques that can perform fast and reliable relation extraction by taking advantage of data redundancy and variability.
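
To make the bootstrapping idea tangible, here is a toy, hypothetical sketch: starting from a couple of seed pairs, it harvests the text between them as surface patterns and reapplies those patterns to propose new pairs. The corpus, seeds, and relation are invented; real systems add pattern scoring, redundancy checks, and Web-scale indexing on top of this loop.

```python
# Toy sketch of pattern-based bootstrapping for relation extraction.
# The corpus, seeds, and relation ("capital_of") are made-up examples;
# production systems score patterns and filter noisy extractions.
import re

corpus = [
    "Paris is the capital of France.",
    "Tokyo is the capital of Japan.",
    "Berlin is a large city in Germany.",
    "Madrid is the capital of Spain.",
]
seeds = {("Paris", "France"), ("Tokyo", "Japan")}

def harvest_patterns(sentences, pairs):
    """Collect the text between a seed pair as a candidate pattern."""
    patterns = set()
    for s in sentences:
        for x, y in pairs:
            if x in s and y in s:
                between = s.split(x, 1)[1].split(y, 1)[0].strip()
                if between:
                    patterns.add(between)
    return patterns

def apply_patterns(sentences, patterns):
    """Propose new entity pairs matched by the harvested patterns."""
    found = set()
    for s in sentences:
        for p in patterns:
            m = re.match(rf"(\w+) {re.escape(p)} (\w+)\.", s)
            if m:
                found.add((m.group(1), m.group(2)))
    return found

patterns = harvest_patterns(corpus, seeds)   # e.g., {"is the capital of"}
print(apply_patterns(corpus, patterns))      # now also proposes ("Madrid", "Spain")
```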

Bio: Dr. Preslav Nakov is a Senior Scientist at the Qatar Computing Research Institute, HBKU. His research interests include computational linguistics and natural language processing (for English, Arabic and other languages), question answering, fact-checking, machine translation, sentiment analysis, lexical semantics, Web as a corpus, and biomedical text processing. Preslav Nakov received a PhD degree in Computer Science from the University of California at Berkeley (supported by a Fulbright grant and a UC Berkeley fellowship), and an MSc degree from Sofia University. He was a Research Fellow at the National University of Singapore (2008-2011), an honorary lecturer at Sofia University (2008), a research staff member at the Bulgarian Academy of Sciences (2008), and a visiting researcher at the University of Southern California, Information Sciences Institute (2005). Preslav Nakov co-authored a Morgan & Claypool book on Semantic Relations between Nominals, two books on computer algorithms, and many research papers in top-tier conferences and journals. He received the Young Researcher Award at RANLP’2011. He was also the first to receive the Bulgarian President’s John Atanasoff award, named after the inventor of the first automatic electronic digital computer. Preslav Nakov is Secretary of ACL SIGLEX, the Special Interest Group (SIG) on the Lexicon of the Association for Computational Linguistics (ACL). He is also Secretary of SIGSLAV, the ACL SIG on Slavic Natural Language Processing. He is an Action Editor of the Transactions of the Association for Computational Linguistics (TACL) journal, a Member of the Editorial Board of the Journal of Natural Language Engineering, an Associate Editor of the AI Communications journal, and an Editorial Board member of the Language Science Press Book Series on Phraseology and Multiword Expressions. He has served on the program committees of the major conferences and workshops in Computational Linguistics, including as a co-organizer and as an area/publication/tutorial/shared task chair, Senior PC member, and student faculty advisor; he co-chaired SemEval 2014-2016 and was an area co-chair of ACL, EMNLP, NAACL-HLT, and *SEM, a Senior PC member of IJCAI, and a shared task co-chair of IJCNLP.

 

T4

Title: Construction and Applications of Domain-Specific Knowledge Graphs

Time: August 15, 13:30 – 15:00

Abstract: This talk gives a systematic overview of the development of knowledge graph technology at Alibaba. Using the product knowledge graph as an example, it presents practical experience in constructing and serving vertical knowledge graphs in the commercial domain, including (1) techniques and productization approaches for large-scale knowledge modeling and knowledge acquisition, and (2) application cases and challenges of vertical knowledge graphs in commercial settings.

Bio: Wei Zhang (张伟) leads the knowledge graph effort at Alibaba. He received his Ph.D. from the National University of Singapore and his bachelor's degree from Harbin Institute of Technology. He is currently a Senior Algorithm Expert on Alibaba's Business Platform, and previously served as head of the Natural Language Processing Application Lab at the Institute for Infocomm Research, Singapore. His research interests include knowledge graphs, natural language processing, and machine learning.

Title: Semantic Computing and Knowledge-Based Question Answering in Real-World Applications

Time: August 15, 15:30 – 17:00

Abstract: As the demand for machine intelligence keeps growing, deep cognitive understanding of natural language has become a key research focus for both industry and academia. This talk gives an in-depth introduction to the two main natural language understanding tasks in real application scenarios: semantic computing and knowledge-based question answering. Across complex and diverse application requirements, achieving accurate semantic computation, efficient and automatic knowledge construction, and question answering built on top of them are highly challenging problems. After reviewing the background and recent progress of semantic computing and knowledge-based question answering, the talk focuses on how these techniques perform in various vertical application domains and on the problems that may remain, in the hope of providing a reference for future technical development in semantic computing and question answering.
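
As a toy illustration of the two tasks named above (and not of any production system), the sketch below matches a user question against stored question patterns with a bag-of-words cosine similarity and then answers from a tiny triple store; the triples, patterns, and slot-filling heuristic are all invented.

```python
# Toy sketch: semantic matching + knowledge-based QA over a tiny triple store.
# The triples, question patterns, and bag-of-words similarity are illustrative
# stand-ins for production semantic-computing components.
from collections import Counter
from math import sqrt

triples = {("Beijing", "capital_of", "China"), ("Yangtze", "longest_river_of", "China")}
patterns = {
    "what is the capital of {x}": "capital_of",
    "what is the longest river of {x}": "longest_river_of",
}

def cosine(a: str, b: str) -> float:
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    norm = sqrt(sum(v * v for v in ca.values())) * sqrt(sum(v * v for v in cb.values()))
    return dot / norm if norm else 0.0

def answer(question: str) -> str:
    # Pick the closest question pattern, then look up the matching triple.
    _pattern, relation = max(patterns.items(), key=lambda kv: cosine(question, kv[0]))
    entity = question.rstrip("?").split()[-1]    # naive slot filling
    for subj, rel, obj in triples:
        if rel == relation and obj.lower() == entity.lower():
            return subj
    return "unknown"

print(answer("What is the capital of China?"))   # Beijing
```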

Bio: Quan Liu (刘权) is a senior researcher and the research director for voice interaction at the iFLYTEK AI Research Institute, head of commonsense reasoning research for the iFLYTEK Super Brain project, and a program committee member of the International Symposium on Commonsense Reasoning (Commonsense 2017). He received his Ph.D. from the Department of Electronic Engineering and Information Science and the National Engineering Laboratory for Speech and Language Information Processing at the University of Science and Technology of China. He has carried out core research in semantic understanding, commonsense reasoning, and human-computer interaction, and has published first-author papers at top international venues including ACL, EMNLP, NAACL, and IJCAI. He received the Best Student Paper Award at the 2013 National Conference on Speech Communication, has led the technical development of several national and provincial research projects, and was a visiting scholar in the Department of Computer Science at York University, Canada. In 2016, the neural-network commonsense reasoning system he designed won first place in the Winograd Schema Challenge, a new cognitive intelligence evaluation held in New York. As the voice interaction research director at the iFLYTEK Research Institute, he has proposed several key semantic understanding and question answering techniques that have continuously improved the semantic capabilities of iFLYTEK's AIUI platform.

 

Tutorial Chairs

Huajun Chen (陈华钧) is a professor and Ph.D. supervisor in the College of Computer Science and Technology at Zhejiang University. His main research areas include the Semantic Web and knowledge graphs, big data analytics, and biomedical informatics. He is a founder of OpenKG, deputy director of the Zhejiang Provincial Key Laboratory of Big Data Intelligent Computing, deputy director of the Language and Knowledge Computing Committee of the Chinese Information Processing Society of China, and deputy director of the Knowledge Engineering and Distributed Intelligence Committee of the Chinese Association for Artificial Intelligence. He has published papers at international conferences and in journals such as IJCAI, WWW, AAAI/IAAI, ICDE, TKDE, and Briefings in Bioinformatics, and received the Best Paper Award at the International Semantic Web Conference (ISWC). As a major contributor, he has received a First Prize for Technological Invention from the Ministry of Education and a Second Prize of the National Science and Technology Progress Award.
Xiaodan Zhu is an Assistant Professor in the Department of Electrical and Computer Engineering (ECE), Queen’s University, Canada; see his full bio under T2 above.

                           



OpenKG.CN


OpenKG.CN, the Chinese Open Knowledge Graph initiative, aims to promote the openness and interlinking of Chinese knowledge graph data, as well as the popularization and broad application of knowledge graph and semantic technologies.


