The "星跃计划" joint research program, launched by Microsoft Research Asia together with Microsoft headquarters, invites you to apply! This round of recruitment once again adds new projects from the E+D (Experiences + Devices) Applied Research team at Microsoft's global headquarters, and we welcome your interest and applications! What are you waiting for? Join the "星跃计划", cross the ocean with us, and explore more possibilities in research!
The program aims to give outstanding candidates the opportunity to work with research teams at Microsoft's global headquarters on real, cutting-edge problems. You will carry out impactful research in an international research environment, in a diverse and inclusive atmosphere, and under the guidance of top researchers!
The cross-lab joint research projects still recruiting span intelligent recommendation, image resizing, computer vision, behavior detection, social computing, intelligent cloud, and more. The open projects are: Online Aesthetic-Aware Smart Image Resizing; UserBERT: Pretrain User Models for Recommendation; Visual representation learning by vision-language tasks and its applications; DNN-based Detection of Abnormal User Behaviors; and Reinforcing Pretrained Language Models for Generating Attractive Text Advertisements. The list of open "星跃计划" projects will be updated continuously, so stay tuned for the latest news!
(There is also a like-collecting giveaway at the end of this post; don't miss it!)
Program Highlights
Do research under the joint guidance of top researchers from Microsoft Research Asia and Microsoft's global headquarters, and exchange ideas in depth with researchers from different backgrounds
Focus on real, cutting-edge problems from industry, and work toward results with impact on both academia and industry
Experience Microsoft's international, open research atmosphere and its diverse, inclusive culture through offline and online collaboration
Eligibility
Undergraduate, master's, or Ph.D. students currently enrolled; students on a deferred offer or a gap year
Able to work full-time in China for 6 to 12 months
See the project descriptions below for each project's detailed requirements
What are you waiting for?
Come find the project that suits you!
Online Aesthetic-Aware Smart Image Resizing
For the new Designer app and Designer in Edge, we need to resize templates to different sizes, since different social media platforms require different target dimensions for media, e.g., Facebook timeline posts for personal accounts and business pages (1200 x 628), LinkedIn timeline posts (1200 x 1200), Twitter timeline posts (1600 x 900), etc. Images are at the center of a template design. We need an ML-powered technique to automatically resize an image (including aspect-ratio change, crop, and zoom in/out) and place it into the resized template (more specifically, the resized image placeholder) for the target platform, so that the image placement looks good (i.e., maintains its aesthetic value).
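To make the task concrete, here is a minimal sketch (not the project's actual method) that fits an image into a target placeholder by sliding a crop window of the target aspect ratio over a crude gradient-based saliency proxy and then resizing; a real solution would replace that proxy with a learned saliency or aesthetics model.

```python
# Minimal sketch only: gradient magnitude stands in for a learned saliency /
# aesthetics model, which the actual project would train.
import numpy as np
from PIL import Image

def smart_resize(img: Image.Image, target_w: int, target_h: int) -> Image.Image:
    gray = np.asarray(img.convert("L"), dtype=np.float32)
    gy, gx = np.gradient(gray)
    energy = np.abs(gx) + np.abs(gy)          # crude "importance" map

    h, w = gray.shape
    target_ratio = target_w / target_h
    # Largest window with the target aspect ratio that fits inside the image.
    if w / h > target_ratio:
        crop_h, crop_w = h, int(round(h * target_ratio))
    else:
        crop_w, crop_h = w, int(round(w / target_ratio))

    # Slide the window along the longer axis and keep the most "important" position.
    x0 = y0 = 0
    if crop_w < w:
        scores = np.convolve(energy.sum(axis=0), np.ones(crop_w), mode="valid")
        x0 = int(scores.argmax())
    if crop_h < h:
        scores = np.convolve(energy.sum(axis=1), np.ones(crop_h), mode="valid")
        y0 = int(scores.argmax())

    crop = img.crop((x0, y0, x0 + crop_w, y0 + crop_h))
    return crop.resize((target_w, target_h), Image.LANCZOS)

# e.g., fit a photo into a Facebook timeline placeholder (1200 x 628):
# resized = smart_resize(Image.open("photo.jpg"), 1200, 628)
```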
Research Areas
Computer Vision and Machine Learning
Qualifications
Ph.D. students majoring in computer science, applied mathematics, electrical engineering or related technical discipline
Relevant experience in the development and application of computer vision and/or machine learning algorithms to solve challenging image understanding problems
Strong scientific programming skills, including C/C++, MATLAB, Python
Independent analytical problem-solving skills
Experience collaborating within research teams to develop advanced research concepts, prototypes, and systems
Strong communication skills
UserBERT: Pretrain User Models for Recommendation
Pretrained language models such as BERT and UniLM have achieved huge success in many natural language processing scenarios. In many recommendation scenarios, such as news recommendation, video recommendation, and ads CTR/CVR prediction, user models are very important for inferring user interest and intent from user behaviors. Previously, user models were trained in a supervised, task-specific way, which cannot achieve a global and universal understanding of users and may limit their capacity to serve personalized applications.
In this project, inspired by the success of pretrained language models, we plan to pretrain universal user models from large-scale unlabeled user behaviors using self-supervision tasks. The pretrained user models aim to better understand the characteristics, interests, and intent of users, and can empower different downstream recommendation tasks through finetuning on their labeled data. Our recent work can be found at https://scholar.google.co.jp/citations?hl=zh-CN&user=0SZVO0sAAAAJ&view_op=list_works&sortby=pubdate.
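As a rough illustration of the idea (not the actual UserBERT implementation), the sketch below pretrains a small Transformer over user behavior sequences with a BERT-style masked-behavior objective; the vocabulary size, special token ids, and model dimensions are placeholder assumptions.

```python
# Hedged sketch of the pretraining idea; vocabulary size, special ids, and model
# dimensions are placeholder assumptions, not the project's actual configuration.
import torch
import torch.nn as nn

NUM_ITEMS = 10_000      # size of the behavior vocabulary (assumed)
PAD_ID, MASK_ID = 0, 1  # special token ids (assumed)

class UserEncoder(nn.Module):
    def __init__(self, d_model=128, nhead=4, num_layers=2, max_len=256):
        super().__init__()
        self.item_emb = nn.Embedding(NUM_ITEMS, d_model, padding_idx=PAD_ID)
        self.pos_emb = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, NUM_ITEMS)   # predicts masked behaviors

    def forward(self, behavior_ids):                 # (batch, seq_len)
        pos = torch.arange(behavior_ids.size(1), device=behavior_ids.device)
        x = self.item_emb(behavior_ids) + self.pos_emb(pos)
        return self.encoder(x)                       # contextual user representations

def masked_behavior_loss(model, behavior_ids, mask_prob=0.15):
    # Self-supervision: randomly mask behaviors and recover them, so no
    # task-specific labels are needed for pretraining.
    mask = (torch.rand_like(behavior_ids, dtype=torch.float) < mask_prob) & (behavior_ids != PAD_ID)
    corrupted = behavior_ids.masked_fill(mask, MASK_ID)
    logits = model.head(model(corrupted))
    return nn.functional.cross_entropy(logits[mask], behavior_ids[mask])

model = UserEncoder()
toy_batch = torch.randint(2, NUM_ITEMS, (8, 64))     # 8 users, 64 behaviors each
loss = masked_behavior_loss(model, toy_batch)
loss.backward()
```

After pretraining, the encoder (without the masked-behavior head) would be finetuned on the labeled data of a downstream task such as news recommendation or CTR prediction.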
Research Areas
Recommender Systems and Natural Language Processing
Qualifications
Ph.D. students majoring in computer science, electronic engineering, or related areas
Self-motivated and passionate about research
Solid coding skills
Experienced in Recommender Systems and Natural Language Processing
Visual representation learning by vision-language tasks and its applications
Learning visual representations from vision-language pair data, pioneered by CLIP and DALL-E, has proven highly competitive with previous supervised and self-supervised approaches. Such vision-language learning approaches have also demonstrated strong performance on both pure vision and vision-language applications. The aim of this project is to continue pushing forward the boundary of this research direction.
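For context, a CLIP-style objective contrasts paired image and text embeddings within a batch. The sketch below shows only the symmetric InfoNCE loss; the image and text encoders are assumed to exist and are replaced by random tensors here.

```python
# Illustration of the symmetric InfoNCE objective used by CLIP-style methods;
# the image/text encoders are assumed and stand-in tensors are used instead.
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_feats, text_feats, temperature=0.07):
    # image_feats, text_feats: (batch, dim) embeddings of paired images and captions.
    image_feats = F.normalize(image_feats, dim=-1)
    text_feats = F.normalize(text_feats, dim=-1)
    logits = image_feats @ text_feats.t() / temperature   # pairwise cosine similarities
    targets = torch.arange(logits.size(0), device=logits.device)
    # Matched pairs lie on the diagonal; contrast each image against every caption
    # in the batch, and vice versa.
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2

# toy example with stand-in encoder outputs
loss = clip_contrastive_loss(torch.randn(32, 512), torch.randn(32, 512))
```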
Research Areas
Computer vision
https://www.microsoft.com/en-us/research/group/visual-computing/
https://www.microsoft.com/en-us/research/people/hanhu/
Qualifications
Overseas Ph.D. students who are currently enrolled, or who hold a confirmed or deferred offer, and are currently staying in China
Majoring in computer vision, natural language processing, or machine learning
DNN-based Detection of Abnormal User Behaviors
Are you excited to apply deep neural networks to solve practical problems? Would you like to help secure enterprise computer systems and users across the globe? Cyber-attacks on enterprises are proliferating and oftentimes causing damage to essential business operations. Adversaries may steal credentials of valid users and use their accounts to conduct malicious activities, which abruptly deviate from valid user behavior. We aim to prevent such attacks by detecting abrupt user behavior changes.
In this project, you will leverage deep neural networks to model behaviors of a large number of users, detect abrupt behavior changes of individual users, and determine if changed behaviors are malicious or not. You will be part of a joint initiative between Microsoft Research and the Microsoft Defender for Endpoint (MDE). During your internship, you will get to collaborate with some of the world’s best researchers in security and machine learning.
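One common way to frame this kind of detection (shown here only as a hedged illustration, not necessarily the project's approach) is to model benign behavior with an autoencoder and treat a sudden jump in reconstruction error as a possible compromise signal; the feature set, sizes, and threshold below are assumptions.

```python
# Hedged illustration: an autoencoder over per-user daily behavior features,
# with reconstruction error as the anomaly score. Feature set, sizes, and the
# 3-sigma threshold are assumptions made for the sake of the example.
import torch
import torch.nn as nn

FEATURE_DIM = 32   # e.g., counts of logins, processes launched, hosts accessed (assumed)

class BehaviorAutoencoder(nn.Module):
    def __init__(self, dim=FEATURE_DIM, bottleneck=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, 16), nn.ReLU(), nn.Linear(16, bottleneck))
        self.decoder = nn.Sequential(nn.Linear(bottleneck, 16), nn.ReLU(), nn.Linear(16, dim))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def anomaly_scores(model, daily_features):
    # Higher reconstruction error = behavior unlike what the model was trained on.
    with torch.no_grad():
        recon = model(daily_features)
    return ((recon - daily_features) ** 2).mean(dim=-1)

model = BehaviorAutoencoder()
# ... train `model` with an MSE loss on historical benign behavior ...
today = torch.randn(100, FEATURE_DIM)   # toy features for 100 users on one day
scores = anomaly_scores(model, today)
flagged = (scores > scores.mean() + 3 * scores.std()).nonzero().flatten()
```

A real system would still need a second stage, as described above, to decide whether a flagged change is actually malicious.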
You would be expected to:
Work closely with researchers in China and Israel toward the research goals of the project.
Develop and implement research ideas and conduct experiments to validate them.
Report and present findings.
Microsoft is an equal opportunity employer.
Research Areas
Software Analytics, MSR Asia
https://www.microsoft.com/en-us/research/group/software-analytics/
Microsoft Defender for Endpoint (MDE)
This is a Microsoft engineering and research group that develops Microsoft Defender for Endpoint, an enterprise endpoint security platform designed to help enterprise networks prevent, detect, investigate, and respond to advanced threats.
https://www.microsoft.com/en-us/security/business/threat-protection/endpoint-defender
Qualifications
At least one year of experience applying machine learning/deep learning to real-world or research problems
Demonstrated hands-on experience with Python through previous projects
Familiarity with deep learning frameworks such as PyTorch, TensorFlow, etc.
Keen attention to detail and a strong analytical mindset
Excellent English reading skills and reasonably good English communication skills
Advisor’s permission
Candidates with the following backgrounds are preferred:
Prior experience in behavior modeling
Prior experience in anomaly detection
Security knowledge a plus
Reinforcing Pretrained Language Models for Generating Attractive Text Advertisements
While pretrained language models (PLMs) have been widely used to generate high-quality texts in a supervised manner (by imitating texts written by humans), they lack a mechanism for generating texts that directly optimize a given reward, e.g., user feedback such as clicks, or a criterion that cannot be directly optimized by gradient descent. In real-world applications, we usually wish to achieve more than just imitating existing texts. For example, we may wish to generate more attractive texts that lead to increased user clicks, more diversified texts to improve user experience, and more personalized texts that are better tailored to user tastes. Combining RL with PLMs provides a unified solution for all these scenarios, and is key to machines achieving human parity in text generation. Such a method has the potential to be applied in a wide range of products, e.g., Microsoft Advertising (text ad generation), Microsoft News (news headline generation), and Microsoft Stores and Xbox (optimizing descriptions of recommended items).
In this project, we aim to study how PLMs can be enhanced with deep reinforcement learning (RL) to generate attractive, high-quality text ads. While finetuned PLMs have been shown to generate high-quality texts, RL additionally provides a principled way to directly optimize user feedback (e.g., user clicks) to improve attractiveness. Our initial RL method, UMPG, is deployed in Dynamic Search Ads and was published at KDD 2021. We wish to extend the method so that it works with all pretrained language models (in addition to UniLM) and study how the technique can benefit other important Microsoft Advertising products and international markets.
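To illustrate the general recipe (not the UMPG method itself), the sketch below runs one REINFORCE-style update: sample an ad continuation from a policy language model, score it with a reward such as a predicted click-through rate, and increase the log-probability of samples whose reward beats a baseline. The tiny policy network, vocabulary, constant baseline, and reward function are all placeholders.

```python
# Schematic REINFORCE sketch (illustrative only; the project's UMPG method,
# UniLM backbone, and production reward signals are not reproduced here).
import torch
import torch.nn as nn

VOCAB = 1000  # toy vocabulary size (assumed)

class TinyPolicyLM(nn.Module):
    """Stand-in for a pretrained language model used as the RL policy."""
    def __init__(self, d=64):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, d)
        self.rnn = nn.GRU(d, d, batch_first=True)
        self.out = nn.Linear(d, VOCAB)

    def forward(self, ids):                      # ids: (seq_len,)
        h, _ = self.rnn(self.emb(ids).unsqueeze(0))
        return self.out(h[0, -1])                # logits for the next token

def reinforce_step(policy, reward_fn, prompt_ids, optimizer, max_new_tokens=20):
    ids, log_probs = prompt_ids, []
    for _ in range(max_new_tokens):
        dist = torch.distributions.Categorical(logits=policy(ids))
        token = dist.sample()
        log_probs.append(dist.log_prob(token))
        ids = torch.cat([ids, token.unsqueeze(0)])
    reward = reward_fn(ids)                      # e.g., predicted CTR of the ad text
    baseline = 0.5                               # constant baseline (assumed)
    loss = -(reward - baseline) * torch.stack(log_probs).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return ids, reward

policy = TinyPolicyLM()
opt = torch.optim.Adam(policy.parameters(), lr=1e-4)
dummy_reward = lambda ids: float((ids % 7 == 0).float().mean())   # placeholder reward
reinforce_step(policy, dummy_reward, torch.tensor([1, 2, 3]), opt)
```

In practice the policy would be initialized from a pretrained model such as UniLM rather than trained from scratch, and the baseline would be learned or batch-averaged rather than constant.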
Research Areas
Social Computing (SC), MSR Asia
https://www.microsoft.com/en-us/research/group/social-computing-beijing/
Microsoft Advertising, Microsoft Redmond
Qualifications
Ph.D. students majoring in computer science, electrical engineering, or equivalent areas
Experience with deep NLP and Transformers is a strong plus
Background knowledge of language model pre-training and/or reinforcement learning
Capable of implementing systems based on academic papers written in English
Candidates with the following backgrounds are preferred:
Good English reading and writing ability and communication skills, capable of writing English papers and documents
Active on GitHub; has used or contributed to well-known open-source projects
How to Apply
Eligible applicants should fill out the application form below:
https://jinshuju.net/f/LadoJK
Or scan the QR code below to fill out the form and apply right away!
Special Bonus!
Share this post to your WeChat Moments and collect 20 likes, then send a screenshot to the "微软学术合作" WeChat official account. The first five readers to do so will receive a Microsoft-branded canvas tote bag!
(Winners will be contacted by staff through the WeChat official account; please keep an eye on your messages.)