AirAsia Berhad: Asia’s leading airline was established with the dream of making flying possible for everyone. Since 2001, AirAsia has swiftly broken travel norms around the globe and has been named the World’s Best Low-Cost Airline by Skytrax for 11 consecutive years. Driven by the Dare to Dream spirit, we pride ourselves on being the region’s largest low-cost carrier, serving 25 countries and over 140 destinations. As we embrace new technology to become a digital travel company, we seek highly talented individuals to join us on our mission to make AirAsia a part of everyone’s travel and lifestyle.

This position reports into the Data Science Centre of Excellence (CoE), which comes under the Digital & Technology Group responsible for spearheading digital transformation across AirAsia. The CoE works on business and operations problems across all entities in the AirAsia Group. Key problems we solve include improving revenue and reducing costs through large-scale data federation, predictive and prescriptive analytics, state-of-the-art machine/deep learning, intelligent scheduling and optimization, and other advanced techniques.

Notes:
• Applications without a proper curriculum vitae will not be considered.
• Fresh graduates without relevant coursework and project work will not be considered.
• An MS or PhD is strongly preferred for Senior, Lead/Manager, and Principal/Senior Manager roles.
• Applicants with experience using Google Cloud Platform are highly favored.
• Only shortlisted candidates will be notified.

Experience
• Experience with common data science toolkits, programming languages, visualisation tools and SQL/NoSQL databases.
• Good applied statistical knowledge, with emphasis on business- and finance-related statistical distributions, statistical testing, modeling, regression analysis, etc.
• Experience with distributed computing platforms and open-source tools and libraries.
• Familiar with, or inclined to adopt, design-thinking methods.
• Able to operate under pressure and change, balancing speed, reliability and interpretability.
• Good working knowledge of productivity tools such as G Suite, Git, Jira and Confluence.
• Experience with code versioning, code review and documentation.

Experience in one or more of the following specialized areas:

Machine Learning
• Understanding of machine learning algorithms such as k-NN, Naive Bayes, SVM and decision trees.
• Experience using ML frameworks such as TensorFlow, PyTorch or scikit-learn.
• Experience using Google Cloud Platform products and services.

Algorithm Engineering
• Strong ability to implement, improve and deploy ML and mathematical models in Golang or Python.
• Conduct systems tests for security, performance and availability.
• Develop and maintain design and troubleshooting/error documentation.

Qualifications
• BS/MS/PhD in Mathematics, Science or Engineering disciplines.
• Up to 4 years of relevant experience beyond the first degree.
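As a rough illustration (not part of the posting), the classical algorithms named above (k-NN, Naive Bayes, SVM, decision trees) can be exercised in a few lines with scikit-learn; the synthetic dataset and split here are arbitrary assumptions:

```python
# Toy sketch: fit each classifier mentioned in the posting on synthetic data
# and record its held-out accuracy.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

scores = {}
for model in (KNeighborsClassifier(), GaussianNB(), SVC(), DecisionTreeClassifier()):
    scores[type(model).__name__] = model.fit(X_tr, y_tr).score(X_te, y_te)
```

The `scores` dict then holds one test accuracy per model, which is the kind of quick baseline comparison such a role typically starts from.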
Link | Contact: Xie Chao | Posted on: 2019-10-21 03:11:28 UTC
Qualcomm's Multimedia R&D and Standards Group is seeking candidates for its Computer Vision, Extended Reality and Imaging R&D teams. As the research organization of Qualcomm Technologies Inc., the group develops new algorithms and systems for computer vision, virtual reality, augmented reality and 3D imaging technologies for embedded systems running in smartphones, automotive and IoT devices. We are seeking candidates with strong knowledge and hands-on experience in visual signal processing, machine learning, imaging, 3D reconstruction, and body and hand skeleton pose estimation. We are also seeking candidates with engineering experience in acceleration on embedded systems with CPU, DSP, GPU and NPU. Candidates at different levels of experience will be considered.

A PhD degree is preferred, or a minimum of an MS with 3+ years of industry experience, in one or more of the following areas:
• Deep learning research for computer vision tasks
• Digital signal processing
• Computer vision algorithms
• Virtual reality and augmented reality
• Imaging technology
• Experience in embedded and real-time system implementation
• Strong research and problem-solving skills, keeping up with the latest findings and trends in computer vision, imaging, cameras, augmented reality and virtual reality
• C, C++, Matlab and Python programming

Open positions:
• Sr. Engineer – Computer Vision
• Sr. Engineer – Extended Reality
• Sr. Engineer – Imaging Technology

Additional skills in the following areas are a plus:
• Digital camera systems and depth-aware cameras
• Sensors and sensor fusion
• 6DoF video processing
• Computer architecture and real-time operating systems
• Software design and development in embedded systems
• Hardware design and implementation in ASIC
Qualcomm is a company of inventors that unlocked 5G, ushering in an age of wireless intelligence and new possibilities that will transform industries, create jobs, and enrich lives. With 5G and other wireless connectivity, we bring content, control, and intelligence closer to the end user to complement the cloud. Qualcomm AI Research is looking for world-class researchers in machine learning and deep learning. Come join a high-caliber team of engineers building advanced machine learning technology, best-in-class solutions, and friendly SW optimization tools to enable state-of-the-art networks to run on devices with limited power, memory, and computation. Led by world-renowned pioneering machine learning researcher Max Welling, members of our team enjoy the opportunity to participate in cutting-edge research while simultaneously contributing technology that will be deployed worldwide in our industry-leading devices. You will be part of a multi-disciplinary team that has repeatedly won major deep learning competitions, such as the ImageNet large-scale visual recognition challenge and the visual wake word challenge. Collaborate in a cross-functional environment spanning hardware, software and systems. See your designs in action on industry-leading chips embedded in the next generation of smartphones, autonomous vehicles, robotics, and IoT devices.

The R&D responsibilities can include the development of new fundamental methods in the following areas:
• Conduct fundamental machine learning research to create new models or new training methods in various technology areas, e.g. deep generative models, Bayesian deep learning, equivariant CNNs, adversarial learning, active learning, Bayesian optimization, reinforcement learning, unsupervised learning, and graph NNs.
• Drive systems innovations for model efficiency advancement on device as well as in the cloud. This includes auto-ML methods (model-based, sampling-based, back-propagation-based) for model compression, quantization, architecture search, and kernel/graph compiler/scheduling, with or without systems-hardware co-design.
• Perform advanced platform research to enable new machine learning compute paradigms, e.g. compute-in-memory, on-device learning/training, edge-cloud distributed/federated learning, and quantum machine learning.
• Create new machine learning models for advanced use cases and achieve state-of-the-art performance and beyond. The use cases can broadly include audio, speech, image, video, power management, wireless, graphics, and chip design.
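One of the efficiency topics named above, model quantization, can be illustrated with a generic symmetric int8 scheme; this is a toy sketch of the idea, not Qualcomm's actual toolchain:

```python
# Symmetric per-tensor int8 quantization: approximate w ~ scale * q,
# where q is an int8 tensor and scale is a single float.
import numpy as np

def quantize_int8(w):
    """Map float weights onto int8 with a single shared scale factor."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover the float approximation from the int8 codes."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)
q, scale = quantize_int8(w)
err = np.abs(w - dequantize(q, scale)).max()  # bounded by half a quantization step
```

The 4x memory saving over float32 (and faster integer arithmetic) is what makes such schemes attractive on power-constrained devices; research like that described above pushes well beyond this naive per-tensor version.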
We are a group of entrepreneurs looking to revolutionize the media industry. We aim to disrupt how content is both created and delivered by using machine learning techniques and novel computing architectures. We are looking for passionate, talented, and creative engineers/interns with a strong machine learning background to help build world-leading AI platforms and tools for media creation. As part of our team, you will have a voice at the table helping define strategic priorities, and will work alongside the founders to develop novel approaches to entertainment and computer vision techniques to advance media. Upon successful collaboration, there is an opportunity to join as a founder with equity.
Link | Contact: Tony | Posted on: 2019-10-21 03:09:58 UTC
Samsung Semiconductor, Inc. in San Diego is searching for research engineers at all levels. The candidate will work as part of a team on research and development of algorithms and theory of machine learning, will research fundamental theoretical aspects of deep learning, and will develop novel techniques to advance the theory and practice of deep learning. The candidate can also apply the developed theory and algorithms to advance the performance of multimedia applications, such as computer vision, augmented reality, natural language processing, or speech processing. Responsibilities include:
• Understand state-of-the-art machine/deep learning concepts, theory, and applications.
• Research algorithms and theory of learning.
• Develop machine/deep learning algorithms for mobile processors.
• Develop simulators and analyze their performance.
• Produce key intellectual property for machine/deep learning.
See Link for the full job description and requirements.
Samsung Semiconductor, Inc. in San Diego is searching for systems engineers and computer scientists for research and algorithm development. The candidate will work as part of a team on the research, system design, and implementation of algorithms for application processors and multimedia processors, and will conduct research in deep machine learning techniques to find solutions for computer vision and image and video processing. Good hands-on experience with machine learning and deep learning algorithm development is preferred. See Link for the full description and requirements.
The Remote Sensing Image Analysis (RSiM) group at the Faculty of Electrical Engineering and Computer Science, Technische Universität Berlin, Germany, is looking for highly motivated PhD candidates. The research of the PhD candidates will aim at developing innovative machine learning techniques (with a special focus on deep learning) for the analysis of big data from space. The main topics include: 1) developing deep neural network models that can overcome data imbalance problems in satellite image classification; and 2) developing active learning methods that are applicable to the designed deep neural networks. The positions will begin in January 2020 and have a duration of 3 years. An MSc degree in computer engineering or computer science is required, with experience in computer vision and deep learning for image understanding. A very good command of German and English is required.
Link | Contact: Begüm Demir | Posted on: 2019-10-21 03:07:53 UTC
We are looking for an intern to join our computer vision team at Mapillary. You will be working on an R&D project to optimize Mapillary’s state-of-the-art object recognition algorithms for mobile platforms. You will be working with a team of engineers and researchers with extensive experience in computer vision, deep learning, and mobile app development. Your goal is to advance the state-of-the-art on low-cost and real-time object recognition and to collaborate with our mobile app team to develop a proof-of-concept for an on-device processing application on iOS. The result of the project can additionally lead to a publication in a high-impact computer vision conference. If you want to know more visit our website or talk to us at booth E-29.
We are looking for talented PhD students and PostDocs in Helsinki, Vaasa and Tampere. Applicants are expected to have a background that encompasses both (engineering) mathematics and programming, e.g. in Python and/or MATLAB. Previous experience in computer vision, (convolutional) neural networks and scientific publishing is valued. These positions concern autonomous perception and understanding of the environment, as well as image-based localization. These constitute key challenges for any autonomous or augmented reality system, especially when these functions must be performed under low-quality sensor data. The research work combines machine learning, computer vision and computer engineering to develop new methods for data-driven 3D visual computing and image-based localization, while considering the computational resource limitations of autonomous systems.
To feel truly immersed in virtual reality, one needs to be able to freely look around within a virtual environment and see it from the viewpoints of one’s own eyes. Full immersion requires that viewers see the correct views of an environment at all times. As viewers move their heads, the objects they see should move relative to each other, with different speeds depending on their distance to the viewer. This is called motion parallax; it is a vital depth cue for the human visual system, and it is entirely missing from existing 360° VR video.

The goal of this project is to capture the real world and recreate its appearance for new, previously unseen views, to enable more immersive virtual reality video experiences. To do this, the project aims to develop novel-view synthesis techniques using deep learning (like Flynn et al., 2019) that are capable of producing high-quality, temporally coherent, time-varying VR video of dynamic real-world environments from one or more standard or 360-degree video cameras. Particularly important is the convincing reconstruction of visual dynamics, such as moving people, cars and trees. This experience will provide improved motion parallax and depth perception to the viewer (like Bertel et al., 2019) to ensure unparalleled realism and immersion.

Funding Notes
UK and EU candidates applying for this project will be considered for a University Research Studentship which will cover UK/EU tuition fees, a training support fee of £1,000 per annum and a tax-free maintenance allowance at the UKRI Doctoral Stipend rate (£15,009 in 2019-20) for a period of up to 3.5 years.

References
* Bertel, Campbell and Richardt, “MegaParallax: Casual 360° Panoramas with Motion Parallax”, IEEE TVCG 2019
* Flynn, Broxton, Debevec, DuVall, Fyffe, Overbeck, Snavely and Tucker, “DeepView: View Synthesis With Learned Gradient Descent”, CVPR 2019
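The motion parallax described above can be sketched numerically: for a sideways head translation, the angular displacement of a point shrinks with its depth. This is a toy geometric model; the 6.4 cm baseline (roughly a human interpupillary distance) is an assumption for illustration only:

```python
# Angular shift (in degrees) of a point at a given depth when the viewpoint
# translates sideways by a small baseline: nearby points shift far more
# than distant ones, which is the parallax depth cue.
import math

def parallax_deg(depth_m, baseline_m=0.064):
    """Angular displacement of a point at depth_m metres for a sideways
    viewpoint translation of baseline_m metres."""
    return math.degrees(math.atan2(baseline_m, depth_m))

# a point half a metre away vs. points further out
shifts = {d: parallax_deg(d) for d in (0.5, 2.0, 10.0, 100.0)}
```

A point at 0.5 m shifts by several degrees while one at 100 m barely moves, which is exactly the cue that fixed-viewpoint 360° video cannot reproduce.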
The Imaging and Computer Vision team of Siemens Healthineers has an immediate opening in Princeton, NJ for a research intern with a focus on Computer Vision and Deep Learning. Our Princeton facility is recognized for providing a stimulating environment for highly talented and self-motivated researchers. You will have the opportunity to test your knowledge in a challenging problem-solving environment. You will be encouraged to think out-of-the-box, innovate and find solutions to real-life problems. Our team has a strong publication record in leading journals and conferences. What are my responsibilities? • Contribute to research projects that focus on anonymization. • Advance the state-of-the-art in the field and publish the results in top journals/conferences. • Fast prototyping, feasibility studies, specification and implementation. We are looking for both PhD and master's-level students.
Link | Contact: Venkatesh N. Murthy | Posted on: 2019-10-21 03:06:02 UTC
**Title**: “Robust 3D Hand-Object Interaction” **Position**: Doctoral research position (PhD) at MPI (https://ps.is.tuebingen.mpg.de) and INRIA (http://thoth.inrialpes.fr). For a detailed description, please visit: https://ps.is.tuebingen.mpg.de/jobs/phd-student-doktorand **Project**: Hands are important to humans for interacting with the physical world. In this project we use computer vision to jointly reason about interacting hands and manipulated objects. We aim to create a new generation of datasets, benchmarks and tools, using 2D/3D vision, robotics and machine learning techniques, that will automate and revolutionize the capture, analysis, control and our general understanding of hands physically interacting with objects. For related work please visit: https://ps.is.tuebingen.mpg.de/research_projects/hands-in-action **Advising & Location**: The student will be advised by Michael Black (https://ps.is.mpg.de/~black) and Cordelia Schmid (https://thoth.inrialpe.fr/people/schmid) and will collaborate closely with Dimitrios Tzionas (https://ps.is.mpg.de/~dtzionas). The main location will be Tübingen (Germany) with possible visits to Grenoble (France). **Candidate**: We seek candidates interested in computer vision, graphics, robotics and machine learning. The successful candidate will have a Master’s degree (or equivalent) in Computer Science (or related) and skills in: written and oral communication in English, Python/C++ programming, and working both independently and in a team. Demonstrated experience in any of the following will be appreciated: computer vision/graphics, AR/VR, 3D simulation/game/physics engines, 3D geometry processing, numerical optimization (e.g. ceres), neural networks (e.g. PyTorch, TF), robotics (control, path planning), CUDA, GPU parallelization, etc. Prior publications, internships, industrial experience, and participation in open-source projects or competitions are a plus.
The PhD student (m/f/d) will receive a PhD funding contract with remuneration equivalent to 65% of pay group E13 of the Collective Wage Agreement for the Public Service. An initial contract will be given for 3 years, with the possibility of a 1-year extension. **Application**: Applications are only possible through the IMPRS portal (https://imprs.is.mpg.de/application). Please mention Michael Black, Cordelia Schmid and Dimitrios Tzionas in your application. The deadline is November 6th, 2019, 11:59am CET. The Max Planck Society is committed to increasing the number of individuals with disabilities in its workforce and therefore encourages applications from such qualified individuals. The Max Planck Society seeks to increase the number of women in those areas where they are underrepresented and therefore explicitly encourages women to apply.
We are looking for a Computer Vision / Machine Learning Researcher who will investigate the problem of sensor fusion (from multiple industrial-type RGB cameras, stereo images, LiDAR, RADAR, etc.) using deep learning. The successful applicant will take ownership of developing, communicating and exchanging research insights within an academic and industrial consortium. The goal is to generate and publish ideas at top-level computer vision and machine learning conferences (CVPR, ICCV, NeurIPS, etc.). If you want to know more, visit our website or talk to us at booth E-29.
It takes powerful technology to connect our brands and partners with an audience of 1 billion. Nearly half of Verizon Media employees are building the code and platforms that help us achieve that. Whether you’re looking to write mobile app code, engineer the servers behind our massive ad tech stacks, or develop algorithms to help us process 4 trillion data points a day, what you do here will have a huge impact on our business—and the world. Want in?

As Verizon’s media unit, our brands like Yahoo, TechCrunch and HuffPost help people stay informed and entertained, communicate and transact, while creating new ways for advertisers and partners to connect. With technologies like XR, AI, machine learning, and 5G, we’re transforming media for tomorrow, too. We're creators and coders, dreamers and doers creating what's next in content, advertising and technology.

About Video Intelligence Platform
The Visual Intelligence group is building new machine-learning, AI-based platforms, services and consumer experiences that reach hundreds of millions of people every day. Our products range from image and video understanding using computer vision, and video stream ranking and recommendations, to building new augmented-reality-based experiences and streaming live entertainment and sports to consumer-facing apps. Our team consists of innovative and enthusiastic scientists and engineers looking to revolutionize video streaming. We’re looking for product-minded researchers and engineers who can help drive new features from idea through to launch.

Position Responsibilities
• Develop and support visual content moderation models for image and video.
• Design and develop lightweight, high-efficiency vision models that run on the edge and on mobile devices.
• Design and carry out both fundamental and applied research in large-scale image and video understanding.
• Work closely with engineers to develop prototypes and transfer research to new products, new processes, and/or new business areas.
• Actively participate in the academic community and publish high-quality research.

Position Impact
This position will be responsible for the research and development of some of our core machine-learning and computer-vision models, which will run on Verizon Media's content as well as Verizon's 5G/MEC use cases. Computer vision has been called out as a strategic investment area for Verizon for creating differentiating business opportunities that leverage 5G and run on the MEC.

Position Opportunity
This position will grow into a subject matter expert owning all moderation research and models, along with specific areas of expertise for models that serve Verizon customers using the MEC, depending on the candidate's expertise.

Position Requirements
• Ph.D. or M.Sc. in Computer Vision, Multimedia, Deep Learning, Machine Learning, AI or a related field.
• 5+ years of research or research engineering experience in computer vision, multimedia, deep learning, machine learning, and AI methodology.
• Publications in top-tier conferences and journals in related fields (e.g. CVPR, ECCV, ICCV, NIPS, ICML, ICLR) a plus.
• Strong algorithmic problem-solving and software development skills (C/C++, Python, Java, etc.).
• Excellent communication and writing skills.

Verizon Media is proud to be an equal opportunity workplace. All qualified applicants will receive consideration for employment without regard to, and will not be discriminated against based on, age, race, gender, color, religion, national origin, sexual orientation, gender identity, veteran status, disability or any other protected category. Verizon Media is dedicated to providing an accessible environment for all candidates during the application process and for employees during their employment. If you need accessibility assistance and/or a reasonable accommodation due to a disability, please submit a request via the Accommodation Request Form (https://www.verizonmedia.com/careers/contact-us.html) or call 408-336-1409. Requests and calls received for non-disability-related issues, such as following up on an application, will not receive a response.
A unique opportunity to be part of an exciting technological challenge. The Abbas Research Lab and Rui Research Lab at the University of Minnesota are looking for a talented postdoctoral research associate to join their teams starting Fall 2019 to work on the Phase I Egg-Tech Challenge, an international competition that may lead to the Phase II Egg-Tech Prize focused on technology transfer and commercialization. The postdoc must have a PhD degree in Computer Science and Engineering, Electrical Engineering, Data Analytics or related fields. Experience in machine learning is required. Additional experience in machine vision or 2D/3D imaging systems is preferred but not required. The successful candidate will serve an initial one-year term, with a possible second-year renewal based on qualifications and performance. Review of applications will begin on November 10, 2019, and will continue until the position is filled. Required Qualifications • Ph.D. in Computer Science and Engineering, Electrical Engineering, Data Analytics, or related fields • Experience in machine learning/deep learning • Excellent oral and written communication skills Preferred Qualifications • Experience in machine vision or 2D and 3D imaging systems • The combination of skills in electrical engineering, particularly 2D/3D imaging systems, and machine learning is preferred.
Mapillary currently uses Structure from Motion to register multiple images of the same area together, effectively fusing information from their GPS positions and reducing the location uncertainty. To improve the global positioning further, aerial images can be used as an additional source of global positioning if one can register the ground images to them. Registering ground and aerial images is challenging because of the large change of perspective: e.g. while we see building facades from the ground, we only see the roofs from the sky. In this project, we will explore techniques based on deep learning and semantics to register ground and aerial images. These can include end-to-end methods that provide the registration given the images, as well as methods exploiting the geometry and semantics of the scene to find correspondences between the two views. If you want to know more, visit our website or talk to us at booth E-29.
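The uncertainty reduction from fusing many registered images, mentioned above, can be illustrated with a toy simulation (the position and the 5 m noise level are assumptions for illustration, not Mapillary figures): averaging n independent GPS fixes shrinks the expected position error roughly by a factor of sqrt(n).

```python
# Toy model: each registered image contributes one noisy GPS fix of the same
# point; averaging the fixes reduces the position error roughly as 1/sqrt(n).
import numpy as np

rng = np.random.default_rng(42)
true_pos = np.array([12.0, 48.0])  # assumed ground-truth position (metres, local frame)
sigma = 5.0                        # assumed per-image GPS noise (metres)

def fused_error(n_images):
    """Position error after averaging the GPS fixes of n registered images."""
    fixes = true_pos + rng.normal(scale=sigma, size=(n_images, 2))
    return np.linalg.norm(fixes.mean(axis=0) - true_pos)

# mean error over repeated trials: a single fix vs. 100 fused images
e1 = np.mean([fused_error(1) for _ in range(200)])
e100 = np.mean([fused_error(100) for _ in range(200)])
```

In practice SfM does far more than average (it jointly optimizes camera poses and 3D structure), but this simple effect is why fusing many registered views tightens the global position estimate.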
**The company**: Azevtec (Autonomous, Zero-Emission Vehicle Technologies) is automating 'first-mile' trucking. We simplify how large enterprises ship goods—our robotic systems automate repetitive vehicle movement and manual tasks in the world’s busiest freight-shipping hubs to enhance operational performance, reduce costs, and improve safety. Azevtec is a rapidly growing, Series-A company founded to drive the adoption of sustainable transportation and deploy autonomous vehicles responsibly.

**The role**: We’re searching for a talented C++ / Python software developer with experience in machine learning and/or geometric methods for computer vision. You will be responsible for creating perception approaches for classifying obstacle types (vehicle, truck, pedestrian, etc.) and determining the pose of objects, building approaches that remain robust in inclement weather, and taking responsibility for the full software engineering lifecycle: requirements, design, source code implementation, unit test, integration, and system test.
**Required qualifications**:
• Bachelor’s degree in computer science and/or electrical/electronics engineering
• C++ expertise – either professionally or via academic coursework
• ROS / software for ground robotic systems
• Sensor processing/perception
• Experience working on a team in a Linux environment and targeting embedded deployment
• Excellent written and verbal communication skills
• Exceptional analytical skills
• Demonstrated strong leadership and people skills
• Sterling references

**Ideal qualifications**:
• Experience with robotics software engineering, autonomous vehicle systems, computer vision, machine learning, and/or planning and controls
• Familiarity with, and/or prior use of:
  – Graphical user interface (GUI) toolkits for implementation of engineering interfaces and tools
  – Visualization/graphics frameworks, libraries, and techniques to support engineering tools
  – FOSS libraries/frameworks such as OpenCV, the Point Cloud Library (PCL), and similar packages
  – FOSS tools supporting software engineering, such as CMake, continuous integration packages, the Google test framework, and others
• Python expertise – either professionally or via academic coursework
• Prior use of Git for software version control
• Experience with sockets (TCP, UDP) programming
We invite candidates to apply for Researcher/Senior Researcher/Principal Researcher/Intern Researcher positions at Wormpex AI Research. Based in Bellevue, in the Greater Seattle area, USA, Wormpex AI Research, led by Chief Scientist Dr. Gang Hua, is the research branch of a fast-growing convenience store chain in Asia backed by global capital. At Wormpex AI Research, we build state-of-the-art AI technologies to facilitate new retail logistics from storefronts and warehouses to manufacturing. You will be offered a rare opportunity to explore a variety of research domains, e.g. Computer Vision, Machine Learning, Deep Learning, Robotics, Operations Research, Graphics, HCI, or a combination of the above. You will have opportunities to deliver advanced AI technologies to different stages of our retail operation process, so as to empower the next generation of retail business operations and shopping experiences. You will also conduct exploratory research to secure future opportunities, and publish your research work in top venues. We offer a competitive package and benefits, and sponsor legal work visa and immigration applications for eligible employees.
The departments of Pediatrics and Radiology at Boston Children’s Hospital and Harvard Medical School invite applicants for open positions at the postdoctoral research fellow and graduate research assistant levels. The funded project involves collaboration with radiologists, radiation oncologists, neuroscientists, neonatologists, pediatricians, neurologists and psychiatrists at Massachusetts General Hospital (MGH), Boston Children’s Hospital (BCH), and the Dana-Farber Cancer Institute (DFCI), all affiliated with Harvard Medical School. The project involves developing and using medical image analysis and machine learning algorithms to quantify normal neurocognitive development, to integrate MRI with radiation maps and clinical data, and to understand the mechanisms of, and predict, adverse neurocognitive outcomes in patients undergoing treatment for brain tumors during childhood. The funded project is listed here: https://www.stbaldricks.org/grants-search/researcherName/Yangming/grantPeriod/current/country/US/state/MA/city/boston/page/1/. The successful candidates will be in the final years of their PhD or will have a PhD degree in BME, EE, CS, Applied Maths, psychiatry, neuroscience or related fields. Experience in machine learning and medical image analysis is preferred but not required. The recruit will be appointed as a “research fellow” (postdoctoral level) at Harvard Medical School and Boston Children’s Hospital. Visiting scholars from related fields are welcome. The new member will work closely with Dr. Yangming Ou, who is currently in the process of being promoted from Instructor (Research Assistant Professor) to Assistant Professor of Radiology at Harvard Medical School (expected Jan. 2020). Dr. Ou is establishing a research lab focusing on medical imaging, informatics and intelligence (http://www.childrenshospital.org/research/researchers/o/yangming-ou).
The team is part of the Fetal-Neonatal Neuroimaging and Developmental Science Center (FNNDSC, https://www.fnndsc.org) and is also affiliated with the Computational Health Informatics Program (CHIP, http://www.childrenshospital.org/chip) at BCH. Members of Dr. Ou’s team (postdoc fellows, PhD students, and research assistants) work on MRI analysis and machine learning for abnormality detection, early screening of disorders, outcome prediction, treatment evaluation, as well as neuroimaging biomarkers for typical and atypical brain development. We are an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veteran status, gender identity, sexual orientation, pregnancy and pregnancy-related conditions or any other characteristic protected by law.
Link | Contact: Yangming Ou | Posted on: 2019-10-12 15:14:15 UTC
Post-doctoral research fellow positions (2 years, with the possibility of extension) in computer vision and machine learning are available immediately. Topics of interest include video understanding, language and vision, pose estimation, and other topics in high-level scene understanding and visual perception. Applicants should have a PhD in computer vision or a related area, with a strong background in modern machine learning techniques as well as a strong publication record at first-tier CV/ML conferences and journals (e.g. CVPR/ECCV/ICCV/NeurIPS, IJCV/PAMI). Fluency in both written and spoken English is expected. Please apply by sending a single PDF containing a CV with a list of publications, a research statement and proposal (max. 3 pages) on any of the topics indicated above or another area of computer vision and/or machine learning, and references from two academic referees. There is also an opportunity to meet and discuss at ICCV.
Link | Contact: Angela Yao | Posted on: 2019-10-12 15:13:23 UTC
For 20 years, Artefacto has been an expert in virtual reality, augmented reality and 3D modeling. Within the Innovation team and under the authority of the Research & Innovation Manager, you will be involved in the upstream phases of preparing collaborative projects, national or European, and will then participate in software development, writing deliverables, and the planned presentations and experimentations. Within the framework of the company's needs, you will contribute to the state of the art, technological choices, and the prototyping of software components with the selected algorithms, while respecting the deadlines and defined quality processes (ISO 9001), with the final objective of ensuring the transfer of these components to other teams so that they can be industrialized and deployed in products and services. As part of the intellectual property strategy, you may also be involved in mapping patentable technologies and filing them if necessary. (Required skills: strong background in computer vision, artificial intelligence and sensor fusion; good knowledge of how to implement consistent, high-quality augmented reality rendering, managing occlusions, diminished reality, lighting, etc.; solid background in mathematics and geometry in space; C/C++, C#, Unity 3D, Shaders, OpenCV, etc.)
Link | Contact: Romain Cavagna | Posted on: 2019-10-11 14:12:56 UTC
For 20 years, Artefacto has been an expert in virtual reality, augmented reality and 3D modeling. Within the Innovation team, and under the authority of the Research & Innovation Manager, you will be involved in the upstream phases of preparing collaborative projects, national or European, and will then participate in software development, the writing of deliverables, and planned presentations and experiments. According to the company's needs, you will participate in state-of-the-art reviews, technology choices, and the prototyping of software components with the selected algorithms, while respecting deadlines and defined quality processes (ISO 9001), with the final objective of ensuring the transfer of these components to other teams so that they can be industrialized and deployed in products and services. As part of the intellectual property strategy, you may also be involved in mapping patentable technologies and filing them if necessary. (Required skills: strong background in implementing multi-user VR/AR immersive experiences; strong knowledge of software architecture, user interaction and rendering quality; good knowledge of scalable service platforms and data streaming; knowledge of motion capture and face tracking; C/C++, C#, Unity 3D, Shaders, Go, gRPC, etc.)
Link | Contact: Romain Cavagna | Posted on: 2019-10-11 14:12:36 UTC
Multiple positions at post-doctoral level are available in the Department of Computer Science at the University of Trento (UniTN), to work under the supervision of Prof. Nicu Sebe (http://disi.unitn.it/~sebe/) and Prof. Elisa Ricci (http://elisaricci.eu). The main research themes are: ● Theme A: Human Behavior Analysis. The activity will focus on designing novel algorithms and models for analyzing and understanding human behavior from visual data gathered by a robotic platform. Specifically, we expect the candidate to develop algorithms for behavioral cue extraction (e.g. gestures, facial expressions), human action recognition/anticipation and social interaction analysis. ● Theme B: Semantic Scene Understanding. The activity will focus on developing deep learning-based algorithms for semantic scene analysis. In particular, the research activity will focus on devising algorithmic solutions for pixel-level prediction tasks (e.g. semantic segmentation, depth estimation) and on their integration within tools for building semantic maps of indoor scenes. ● Theme C: Domain Adaptation and Continual Learning. The activity will build on previous works from the research group (Mancini et al., CVPR 2019; Berriel et al., ICCV 2019; Roy et al., CVPR 2019) and will focus on devising novel algorithms for domain adaptation and continual learning, with special emphasis on methodologies for video stream analysis. At the time of application, eligible candidates should have a Ph.D. degree in Computer Science, Engineering, or related fields. A proven scientific track record at major computer vision and multimedia conferences/journals (CVPR, ICCV, ECCV, ACM Multimedia, TPAMI, IJCV, etc.) is a criterion for selection, as is experience with deep learning algorithms and relevant platforms (e.g. TensorFlow, PyTorch, Theano, Caffe). The successful candidates will be offered a competitive salary commensurate with qualifications and experience.
Please send the application, including a CV with publication list, brief description of research interests and names of two referees to Prof. Nicu Sebe (firstname.lastname@example.org) or Prof. Elisa Ricci (email@example.com) quoting “Postdoc in Computer Vision” in the email subject. The call will remain open until the positions are filled but a first deadline for evaluation of candidates will be November 1st, 2019. The positions are available for a minimum of one year, renewable up to three years.
Link | Contact: Nicu Sebe | Posted on: 2019-10-11 14:12:14 UTC
RESPONSIBILITIES 1. Improve fundamental computer vision and SLAM algorithms, understand the demands of the business scenario, and drive projects to deployment. 2. Conduct forward-looking research on SLAM and motion planning, and guide junior researchers in exploring industry-relevant advanced technologies. 3. Participate in the development and implementation of employee development plans, such as mentoring teams in competitions. MINIMUM QUALIFICATIONS 1. Holds, or is in the process of obtaining, a master's degree or above in computer science or a related field. 2. 3+ years of working experience at well-known high-tech companies or universities. 3. Demonstrated experience in scientific research related to computer vision and SLAM. 4. Strong skills in problem analysis, decomposition and communication; patient, meticulous and rigorous at work. 5. Strong collaboration skills, able to work closely with different groups as well as other researchers. PREFERRED QUALIFICATIONS Experience leading a research team or taking responsibility for a project; first-authored publications at leading conferences or journals on artificial intelligence; awards at leading competitions on artificial intelligence.
Link | Contact: Eric.Yu | Posted on: 2019-10-11 14:11:48 UTC
RESPONSIBILITIES 1. Improve fundamental computer vision and deep learning algorithms, understand the demands of the business scenario, and drive projects to deployment. 2. Conduct forward-looking research based on deep learning, and guide junior researchers in exploring industry-relevant advanced technologies. 3. Participate in the development and implementation of employee development plans, such as mentoring teams in competitions. MINIMUM QUALIFICATIONS 1. Holds, or is in the process of obtaining, a master's degree or above in computer science or a related field. 2. 3+ years of working experience at well-known high-tech companies or universities. 3. Demonstrated experience in scientific research related to computer vision and deep learning. 4. Strong skills in problem analysis, decomposition and communication; patient, meticulous and rigorous at work. 5. Strong collaboration skills, able to work closely with different groups as well as other researchers. PREFERRED QUALIFICATIONS Experience leading a research team or taking responsibility for a project; first-authored publications at leading conferences or journals on artificial intelligence; awards at leading competitions on artificial intelligence.
Link | Contact: Eric.Yu | Posted on: 2019-10-11 14:11:23 UTC
RESPONSIBILITIES: 1. Research and develop computer vision / 3D vision / image processing algorithms. 2. Participate in the research and development of key algorithm platforms and related tools. 3. Improve our algorithm system for various application scenarios, such as Autopilot, Smart City and Smart Business, and innovate on our algorithms to meet new product challenges. MINIMUM QUALIFICATIONS: 1. Bachelor's degree or above, majoring in computer science, math, electronic engineering, etc. 2. Proficient in coding; C or C++ preferred. 3. Good logical reasoning and strong software and algorithm coding ability. 4. Good communication skills and team collaboration spirit. PREFERRED QUALIFICATIONS: 1. ACM or other coding competition experience. 2. Experience in related projects. 3. Deep understanding of computer vision or image processing. 4. Related internship or practical experience preferred. 5. Publications at leading conferences or journals on artificial intelligence.
Link | Contact: Eric.Yu | Posted on: 2019-10-11 14:10:54 UTC
What you'll do: Take responsibility for key technologies such as face recognition and large-scale face retrieval/clustering. What we are looking for: 1. MS or above; familiar with face recognition theory and algorithms, especially large-scale face recognition and high-concurrency facial feature retrieval. 2. Good knowledge of the AI training and deployment process; ability to work independently; strong interpersonal and teamwork skills. 3. Solid programming skills and strong C/C++ and Python ability; proficient in deep learning product performance optimization. Bonus Points: 1. Experience at Tencent Youtu, SenseTime, Face++, Sensing Tech, Baidu, Ali, Hikvision, Yitu, Dahua, etc. 2. International papers on face recognition, or experience in face recognition competitions such as MegaFace, NIST, etc.
Link | Contact: Eric.Yu | Posted on: 2019-10-11 14:10:34 UTC
Job responsibilities: 1. Develop intelligent driving strategy algorithms and drive them to product deployment. 2. Track progress in strategy algorithm research and development, keeping strategy algorithm performance world-leading. 3. Guide the team's internal technical roadmap to maximize team output. 4. Ensure the safety, stability, reusability and scalability of the strategy algorithms. 5. Research new technologies related to the project. Requirements: 1. Computer-related major, master's degree or above. 2. More than five years of experience in computer vision-related strategy algorithm development and design, and more than three years of development experience in the ADAS industry; experience with ENCAP or related industry standards is preferred. 3. Proficiency in monocular vision principles, various filtering algorithms and motion models. 4. Proficient in C++ development and familiar with Linux development; experience in embedded device development, deployment and debugging is preferred. 5. Good at communication, keen on sharing and teamwork, with good team awareness and a collaborative spirit; strongly self-driven, with strong internal and external communication skills.
Link | Contact: Eric.Yu | Posted on: 2019-10-11 14:10:15 UTC
We're looking for a postdoc to perform research on aspects of audiovisual perception and multimodal representations, and to help coordinate a team of 6 PhD students working in this area, with the goal of integrating visual, audio and textual inputs towards general scene understanding and of building a virtual companion demo (i.e., "a camera you can talk to"). The candidate should have a proven track record in research (publications at top computer vision/machine learning venues) and strong communication skills.
Link | Contact: Tinne Tuytelaars | Posted on: 2019-10-11 14:09:58 UTC
SenseTime is a global company focused on developing innovative AI technologies that positively contribute to economies, society and humanity. Come and join us! As a research scientist, you will work with world-class talented researchers to design innovative algorithms related to computer vision and deep learning, and build artificial intelligence solutions for vertical applications such as video surveillance, autonomous driving, mobile, retail, etc.
The operating room is a high-tech environment in which the surgical devices generate a lot of data about the underlying surgical activities. Our research group aims at making use of this large amount of multi-modal data coming from both cameras and surgical devices to develop an artificial intelligence system that can assist clinicians and staff in the surgical workflow. In this context, we currently have a new PhD position at the University of Strasbourg that will focus on developing machine learning and computer vision methods to understand the scene of the operating room, recognize the human activities, and analyze the workflow. The project will use as input multi-view RGB-D videos capturing surgical activities. As this PhD position is funded by a fellowship from Intuitive Surgical, the successful candidate will have the opportunity to interact with researchers from Intuitive Surgical and also to conduct internships at the company in Sunnyvale, California.
The candidate will be 1) responsible for video analysis and content creation in the field of live broadcasting; and 2) expected to survey, innovate on, implement, and optimize cutting-edge AI technologies. Qualifications: 1) Experience in pattern recognition/computer vision/machine learning. 2) Master's or PhD in Computer Science, Mathematics, Statistics or related fields with a strong mathematics background. 3) Self-driven, with strong problem-solving skills. 4) Familiar with C/C++/Python, as well as CV libraries (OpenCV, NumPy) and deep learning frameworks (TensorFlow, PyTorch). 5) Proficiency in mainstream CV algorithms; hands-on experience in performance optimization is a plus. 6) Experience with generative models such as GAN, VAE, Glow, etc. is a plus. 7) Publications in top CV conferences (CVPR, ICML, AAAI, NIPS, ECCV, ICCV, etc.) are a plus.
Link | Contact: Finn Wong | Posted on: 2019-10-11 14:08:11 UTC
Level 5 is looking for doers and creative problem solvers to join us in developing the leading self-driving system for ridesharing. Our team members come from diverse backgrounds and areas of expertise, and each has the opportunity to have an outsized influence on the future of our technology. Our world-class software and hardware experts work in brand new garages and labs in Palo Alto, California, and offices in London, England and Munich, Germany. And we're moving at an incredible pace: we're currently servicing employee rides in our test vehicles on the Lyft app. Learn more at lyft.com/level5. This newly formed team will develop new experimental solutions that combine the latest findings in cutting-edge computer vision, deep learning and large-scale data processing to advance the capabilities of our existing systems and to advance the state-of-the-art of the field. Responsibilities: - Work in a small, high-velocity team of engineers and researchers - Design and prototype new computer vision and deep learning solutions - Develop case studies and experimentally validate hypotheses - Collaborate with AV engineering teams in productionizing systems - Advance the state-of-the-art, publish and represent Level 5 at top-tier conferences (e.g. CVPR, NIPS, ICCV, RSS, ICRA) Experience & Skills: - Hands-on deep learning experience (deep learning, reinforcement learning, GAN, autoencoders, etc.) - Experience publishing at state-of-the-art conferences (e.g. CVPR, NIPS, ICCV, RSS, ICRA)
Responsibilities: Research and development of CG-related AI products; contribute to the cutting-edge exploration and implementation of high-quality content generation and interactive technology. Required: 1) Solid background in computer graphics and good practical abilities. 2) Experience in character rigging, motion capture, or 3D modeling. 3) Familiarity with animation software (Maya, etc.); experience with physics-based animation and modeling is a plus. 4) Proficiency in C++/Python with good coding style; experience with related libraries (OpenGL/Direct3D) is preferred. 5) Knowledge of state-of-the-art graphics research; a track record at SIGGRAPH, CVPR, ICCV, ECCV, etc. is preferred. 6) Passionate about AI and CG technology, striving for excellence.
Link | Contact: Qing Wang | Posted on: 2019-10-11 14:07:05 UTC
The Artificial Intelligence Research Centre is a new section within Group Technology and Research (GTR), established in Shanghai in early 2019. GTR currently runs seven other research programs: four are oriented towards the main industries we serve: Maritime, Oil & Gas, Power & Renewables and Precision Medicine. Three programs address cross-industry challenges: Digital Assurance, Ocean Space and Energy Transition. The new section will focus on building competence and prototypes in the area of perception AI, with the aim of applying machine and deep learning to computer vision in the industries served by DNV GL. We currently have a number of positions available. Visit our group website for details of the research focus: https://www.dnvgl.com/technology-innovation/artificial-intelligence/index.html. Position Responsibilities: The successful candidate will manage and execute research projects within the Artificial Intelligence Research Centre. He/She will report to the head of the section and be responsible for the following tasks: - Develop algorithms for detecting the condition of, and anomalies in, physical industrial assets, and use these to enhance audit/inspection/survey and assurance services - Enhance algorithms to handle multi-sensor input - Create new, or enhance existing, DL algorithms capable of detecting patterns in images without background - Develop concepts for the assurance of advanced IoT devices comprising cameras and DL algorithms - Manage and execute technology development projects - Author technical reports, scientific papers, non-technical reports and professional presentations. Position Qualifications: The successful candidate should have: the ability to deliver project results within agreed schedules; the ability to articulate work results to internal and external stakeholders; experience in software development, particularly machine learning techniques in computer vision; a good command of English; and a Master's or PhD degree in a relevant area, preferably computer science. Company & 
Business Area Description DNV GL is a global quality assurance and risk management company. Driven by our purpose of safeguarding life, property and the environment, we enable our customers to advance the safety and sustainability of their business. We provide classification, technical assurance, software and independent expert advisory services to the maritime, oil & gas, power and renewables industries. We also provide certification, supply chain and data management services to customers across a wide range of industries. Combining technical, digital and operational expertise, risk methodology and in-depth industry knowledge, we empower our customers’ decisions and actions with trust and confidence. We continuously invest in research and collaborative innovation to provide customers and society with operational and technological foresight. With origins stretching back to 1864 and operations in more than 100 countries, our experts are dedicated to helping customers make the world safer, smarter and greener. Equal Opportunity Statement DNV GL is an Equal Opportunity Employer and gives consideration for employment to qualified applicants without regard to gender, religion, race, national or ethnic origin, cultural background, social group, disability, sexual orientation, gender identity, marital status, age or political opinion. Diversity is fundamental to our culture and we invite you to be part of this diversity!
Join our Vision Technologies and Solutions Group (VTS RG) to develop solutions to real-world computer vision problems where there is a limited amount of training data for your machine learning algorithms. The CT Simulation and Digital Twin Technology Field (SDT TF) is seeking a highly motivated Master's/PhD student available for an internship in the area of semi-supervised/unsupervised methods for object recognition/pose estimation and semantic segmentation. The project will involve analysis of the state of the art in academia and industry, and the design of novel practical techniques to address challenging problems in autonomous systems such as self-driving trains or autonomous robots.
Join our Vision Technologies and Solutions Group (VTS RG) to develop solutions to real-world computer vision problems where there is a limited amount of training data for your machine learning algorithms. The CT Simulation and Digital Twin Technology Field (SDT TF) is seeking a highly motivated research engineer with a focus on synthetic data augmentation for bridging the realism gap. This role will involve analysis of the state of the art in academia and industry, and the design of novel practical techniques to address challenging problems in autonomous systems such as self-driving trains or autonomous robots.
Join our Vision Technologies and Solutions Group (VTS RG) to develop solutions to real-world computer vision problems where there is a limited amount of training data for your machine learning algorithms. Our team is seeking a highly motivated research scientist/professional with a focus on semi-supervised/unsupervised methods for object recognition/pose estimation and semantic segmentation. This role will involve analysis of the state of the art in academia and industry, and the design of novel practical techniques to address challenging problems in autonomous systems such as self-driving trains or autonomous robots.
We have two postdoc positions available. One position is in computer vision (open to all research areas; a solid background in low-level/mid-level vision would be a plus); the other is a joint postdoc with Prof. David Whitney @ whitneylab.berkeley.edu (interest and experience in human vision and medical image search would be a plus). The ideal candidate will have publications at top computer vision / machine learning venues, with strong math / debugging / communication skills. Please introduce your research and career plans when inquiring.
Apple’s Camera & Photos team is seeking highly qualified Ph.D. students for the coming summer to work on challenging problems related to computer vision and computational photography for the most popular camera in the world, iPhone. The Camera & Photos team focuses on user experience by using computer vision and image processing through machine learning. You will be working within the Camera Technologies team on cutting-edge camera and vision technologies for iPhones and iPads, specifically low-level image restoration/enhancement and multiple-camera fusion, with the potential to deliver the developed features into the hands of millions of our customers. Key Qualifications: - In-depth expertise in computer vision and deep learning; expertise in computational photography is a plus - Strong publications in CVPR/ICCV/ECCV or other top vision/learning conferences - Good presentation skills - Passionate about building extraordinary products - Excellent programming skills in Python; C/C++ is a plus - Knowledge of common ML frameworks
Link | Contact: Feng Li | Posted on: 2019-10-05 07:27:45 UTC
Are you passionate about artificial intelligence, computer vision and health? We are seeking six PhD candidates for our new lab on AI for Medical Imaging (AIM lab), a research collaboration between the University of Amsterdam (the Netherlands) and the Inception Institute of Artificial Intelligence (United Arab Emirates). The research lab will be focused on medical image analysis by machine learning, covering active scientific topics of broad interest, including both methods and applications. Those topics range from low-level vision and data pre-processing tasks, to high-level image/video analysis tasks. From a technical perspective, we will be working on fundamental and relatively general deep learning models and algorithms, which will be applied to specific diseases, including but not limited to Alzheimer’s disease, cancer and cardiovascular diseases.
iPhone is the most popular camera in the world. The seamless integration of software and hardware has led to features like Memories and Portrait Mode, which deliver experiences that are magical. The Camera & Photos team focuses on user experience by applying computer vision and image processing through machine learning. Combining state-of-the-art software techniques with next-generation hardware, the Camera Software team takes the mobile photography experience to the next level. Do you have a deep working knowledge of media, video and imaging? Do you thrive in dynamic work environments, and yearn to ship amazing products that are enjoyed by millions of people? Join Apple’s Camera GPU Performance team, to help us implement and optimize image processing algorithms on GPU using Metal. Work closely with the capture and imaging teams on implementation, integration, and optimization of vision and computational photography algorithms. An ideal candidate will have 3+ years of experience working as part of a software development team, and familiarity with C/C++/Objective-C, Unix/Linux, embedded systems, performance optimization techniques, and/or GPU programming.
Our Mission Scape seeks to unify human-machine understanding by connecting the physical and digital world. The Role Scape is building the 3D map of the world: the platform on top of which all future industries, like Autonomous Vehicles, Robotics and Augmented Reality, will sit. For this map to truly allow intelligent decision making, it will need to contain an abundance of semantic information. Our vision is that there will be a dedicated team whose sole focus is the enhancement of this feature; this process starts with the appointment of a Lead Machine Learning Engineer who we can then build this team around. You will start with the analysis and enhancement of existing solutions to fit the current use cases. You will then be tasked with building out a data mining strategy, setting up a labelling process and assisting in the expansion of the team around you. The team will further develop a proprietary solution for semantic detection and classification, and will work towards the productization and perpetual improvement of these solutions. Minimum qualifications MSc in Computer Science, Engineering or Mathematics 2+ years of industrial experience in productization of Machine Learning models A deep understanding of Convolutional Neural Networks and related problems Exceptional data mining skills and experience managing a data labelling process Great knowledge of Python and basic SQL Strong communication skills Proven track record in writing structured software from scratch Preferred qualifications PhD in Deep Learning-based Computer Vision Experience leading and growing a machine learning team Experience with AWS services, including AWS SageMaker Experience with terabyte-scale computer vision datasets Who we are Scape Technologies is a computer vision startup, located in Shoreditch, London. The company is building a cloud-based ‘visual engine’ that allows camera devices to understand their environment, using computer vision. 
Rather than rely on 3D maps built and stored locally, Scape's visual engine builds and references 3D maps on the cloud, allowing devices to tap into a ‘shared understanding’ of an environment. The first product is an SDK for mobile devices that allows augmented reality content to be anchored to specific locations, outside and at an unprecedented scale. The company was founded in 2016 and is backed by top European venture funds. Our Culture & Values Scape Technologies is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. The company celebrates being 'In it Together', 'Restless Self-Improvement' and 'Building the Future'.
Our Mission Scape seeks to unify human-machine understanding by connecting the physical and digital world. The Role As a Computer Vision Researcher at Scape you will work on Image Matching and/or Large Scale Image Retrieval, with the purpose of developing novel state-of-the-art methods and publishing papers at major computer vision and machine learning conferences. You will be able to collaborate cross-functionally with a wide engineering team to analyse and extract meaning out of one of the largest datasets available in the industry. In addition, your research will have direct impact on Scape’s mapping and localization pipeline. Minimum Qualifications PhD in Computer Vision, ideally with a focus on image matching and/or image retrieval. Existing publications as a first author in top CV/ML conferences such as CVPR, ICCV, ECCV, NeurIPS. In-depth understanding of machine learning for computer vision. Proficient in Python and deep learning frameworks (e.g. PyTorch, TensorFlow). Preferred Qualifications Open source research projects published as part of PhD work. Familiarity with Git and C++. Please include a link to your publications in Google Scholar. Who we are Scape Technologies is a computer vision startup, located in Shoreditch, London. The company is building a cloud-based ‘visual engine’ that allows camera devices to understand their environment, using computer vision. Rather than rely on 3D maps built and stored locally, Scape's visual engine builds and references 3D maps on the cloud, allowing devices to tap into a ‘shared understanding’ of an environment. The first product is an SDK for mobile devices that allows augmented reality content to be anchored to specific locations, outside and at an unprecedented scale. The company was founded in 2016 and is backed by top European venture funds. Our Culture & Values Scape Technologies is an equal opportunity employer. 
We celebrate diversity and are committed to creating an inclusive environment for all employees. The company celebrates being 'In it Together', 'Restless Self-Improvement' and 'Building the Future'.
Post-doctorate topic: Semi- and Weakly Supervised Convolutional Networks for Joint People Detection and Tracking. In this proposal, we focus on the problem of joint people detection and tracking in computer vision, which is a foundation stone for several applications such as health monitoring and autonomous vehicles. Most contemporary systems for people detection and tracking make use of fully-supervised techniques. One downside of fully-supervised machine learning techniques is their need for large amounts of annotated data. Data annotation is a time-consuming and costly undertaking, which suffers from several technical challenges such as inter- and intra-annotator variance. Thus, dependence on fully-supervised techniques makes it impossible to take advantage of the large amounts of unannotated data, in the form of videos, that are easily available in this digital age. In comparison to fully annotated datasets, unannotated datasets are much more difficult to work with, being devoid of any supervisory information in the form of ground truth. One practical approach towards reducing dependence on extensive annotations is to build systems which can learn effectively from partial or incomplete annotations. There are two paradigms of machine learning which deal with partial and incomplete annotations: semi-supervised learning and weakly supervised learning. In the context of joint detection and tracking of people, semi-supervised learning models the scenario where the training data is annotated with bounding boxes and tracking information for only a subset of people (i.e. not all people in an image or video are annotated). Weakly supervised learning considers the scenario where no bounding box or tracking information is available for any person; only a label indicating whether a person exists in a given frame is provided. 
A practical solution to these two scenarios would have major ramifications, as it would remove the need for complete and exhaustive annotation of large amounts of data. This translates into major cost savings, not to mention a practical way to harness the large amounts of unannotated data available. http://univ-cotedazur.fr/institutes/3ia/3ia-post-doc-offers-offres-de-postdoc-3ia
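As a purely hypothetical illustration (not part of the project description), the difference between the two supervision regimes described above can be sketched as annotation records: a semi-supervised frame carries boxes and track IDs for only some people, while a weakly supervised frame carries nothing but a frame-level presence label. All field names here are invented for the sketch.

```python
# Hypothetical annotation records for joint people detection and tracking.

# Semi-supervised regime: boxes [x, y, w, h] and track IDs exist, but only
# for a subset of the people actually present in the frame.
semi_supervised_frame = {
    "frame_id": 17,
    "annotations": [
        {"track_id": 3, "bbox": [112, 40, 58, 130]},  # one annotated person
        # other people in this frame may be left unannotated
    ],
}

# Weakly supervised regime: no boxes or tracks at all, only a frame-level
# label saying whether any person is present.
weakly_supervised_frame = {
    "frame_id": 17,
    "person_present": True,
}

def supervision_type(frame):
    """Classify a frame record by the kind of supervision it carries."""
    return "weak" if "person_present" in frame else "semi/full"

print(supervision_type(semi_supervised_frame))   # semi/full
print(supervision_type(weakly_supervised_frame)) # weak
```

The cost asymmetry is visible directly in the records: the weak label is a single boolean per frame, whereas each semi-supervised annotation requires a box and a track identity per person.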
The successful applicant will apply machine learning and other AI methods to large datasets of naturalistic data to solve traffic safety issues, typically related to human factors in automation. The post-doc will have the unique opportunity to work in three different research groups at three different departments within Chalmers to carry out this interdisciplinary project, where behavioral science, machine learning, and video analysis come together to improve transport. This post-doc will develop and apply artificial intelligence algorithms to large naturalistic datasets in order to analyze and model how drivers behave in traffic in safety-critical and non-safety-critical situations. A large part of the work will involve the extraction and analysis of human behavior from video, including, for example, posture, glance behavior, and tasks secondary to driving.
We are interested in representation learning, especially learning features from images and videos. Representation learning is one important aspect of deep learning and machine learning, and good representations of input data are essential for the generalization ability, interpretability and robustness of machine learning methods.
AI Singapore (AISG) is a national programme launched by the National Research Foundation (NRF). It brings together all Singapore-based research institutions and the vibrant ecosystem of AI start-ups and companies to perform use-inspired research, grow the knowledge, create the tools, and develop the talent to power Singapore's AI efforts. As an AI Scientist or Scientific Officer, you will conduct desk research to identify and propose research domains and topics of significant impact and priority. Working in close interaction with academic researchers and industry partners, you will assist AI Singapore leadership in shaping future grant programmes as well as reviewing and evaluating proposal submissions. You will also have the opportunity to engage in state-of-the-art research and contribute to the development of open source systems.
Qualifications: A postdoctoral fellowship is now available for a qualified candidate in the Sung Lab, Magnetic Resonance Research Labs (MRRL), Department of Radiological Sciences, UCLA. We are looking for a highly motivated individual to conduct research in the area of Machine Learning and/or Image Analysis for detecting and diagnosing prostate cancer. Our current projects include 1) semi-supervised MRI-based prostate cancer prediction with deep generative learning, and 2) improved correlation of prostate multi-parametric MRI with histologic findings. The specific project will be determined based on the interests of the mentor and candidate. The successful candidate will participate in all stages of investigation, including theoretical advancements, experimental design, and data collection and analysis. The position requires strong verbal and written communication skills and the ability to interact with others with diverse areas of expertise. Requirements: We seek highly qualified individuals who are highly motivated, flexible, detail-oriented, collaborative, and committed to research excellence. Candidates should have a recent PhD (or soon-to-be conferred) in Computer Science, Electrical Engineering, Biomedical Engineering, or a related field, and strong experience in at least one of the following areas: machine learning, deep learning, or image analysis. Candidates with previous research experience working with medical images are highly desirable, though this is not required. Environment: The Magnetic Resonance Research Labs (MRRL) in the Department of Radiological Sciences at the University of California, Los Angeles has six faculty members and more than 30 staff, students, and postdoctoral fellows. MRRL has access to three clinical research Siemens MRI systems (3T Prisma, 3T Skyra, and 1.5T AvantoFit), as well as a dedicated research-only MRI facility (3T Prisma system).
For more information, please visit http://mrrl.ucla.edu/sunglab/ To apply, email a single PDF-file containing the following: curriculum vitae (CV), statement of research experience and interests, and contact information for two references with the subject heading “Postdoctoral Fellow” to: Kyung Sung, Ph.D. (KSung@mednet.ucla.edu). Salary will be based on University guidelines for postdoctoral fellows and will be commensurate with experience. The University of California is an affirmative action/equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability, age, or protected veteran status. For the complete University of California nondiscrimination and affirmative action policy see: UC Nondiscrimination & Affirmative Action Policy.
We are a fast-growing autonomous driving startup. We aim to accelerate the development of autonomous driving technologies through more effective ways of using data and running tests. We are a team of innovators in practice. We are looking for passionate, talented, and creative engineers/researchers/interns with a strong computer vision background to help build world-leading AI platforms for autonomous driving. As part of our AI team, you will work alongside an all-star team to develop novel algorithms and computer vision techniques to advance the state of the art in visual perception and robotics. You'll learn new technologies while working on code, algorithms, and research in your area of expertise and in new areas you are willing to explore. If you are passionate about autonomous driving and believe computer vision can do far more than it does today, you should apply!
We are a fast-growing autonomous driving startup. We aim to accelerate the development of autonomous driving technologies through more effective ways of using data and running tests. We are a team of innovators in practice. We are looking for passionate, talented, and creative engineers/researchers/interns with a strong machine learning background to help build world-leading AI platforms for autonomous driving. As part of our AI team, you will work alongside an all-star team to develop novel algorithms and machine learning techniques to advance the state of the art in visual perception and robotics. You'll learn new technologies while working on code, algorithms, and research in your area of expertise and in new areas you are willing to explore. If you are passionate about autonomous driving and believe machine learning can do far more than it does today, you should apply!
This research intern position is offered at the STARS team in INRIA, Sophia Antipolis. This position is offered for a period of 6 months with a focus on large-scale object detection. For more information about the position and for instructions to apply, please click on the link to the Google docs and follow the instructions therein.
We are seeking a highly motivated post-doc researcher in the field of computer vision and machine learning. Candidates with a strong record of accomplishments in computer vision and machine learning are encouraged to apply. Apply by sending an e-mail to
As a research scientist at CosmicBC, you will build ML models and algorithms for a wide range of investment tasks with the goal of improving returns. We are looking for AI researchers interested in developing new theories and algorithms in the following areas: unsupervised/semi-supervised/self-supervised learning; meta-learning (few-shot learning); domain adaptation and knowledge transfer; deep reinforcement learning; time-series analysis; quantitative investment.
To work on recovery of hand pose from images
Link | Contact: Liu Ying | Posted on: 2019-09-29 16:22:05 UTC
The Bradley Department of Electrical and Computer Engineering at Virginia Tech seeks applications for one or more tenured/tenure-track positions in Computer Engineering, at the rank of Assistant or Associate Professor, specifically in the area of machine learning for autonomous systems, including (but not limited to) deep learning, knowledge representation & reasoning, reinforcement learning, reasoning under uncertainty, and their applications to perception in robotics and autonomy that is terrestrial, space-based, or naval in scope. The position(s) will be based in Blacksburg, Virginia, or the Greater Washington, DC, metro area.
Design, implement, and validate algorithms to support robust and scalable mapping pipelines for Argo's self-driving cars. In addition, you will design and implement the necessary metrics and tools.
Description: The Mathematical Data Science Lab under Prof. Ehsan Elhamifar at Northeastern University's Khoury College of Computer Sciences is inviting applications for Postdoctoral Researchers. We are seeking highly motivated postdoctoral researchers with a strong background and interest in computer vision and machine learning to join our group in a quest to perform breakthrough research advancing the following projects. Projects in computer vision include procedure learning from instructional videos, large-scale image and video recognition with no/few labels, and structural deep learning for weakly supervised segmentation and recognition. Projects in machine learning include structured dynamic data summarization, streaming multi-label zero-shot and few-shot learning, and scalable discrete optimization algorithms for robust machine learning. Qualifications: A recent Ph.D. in Computer Science, Electrical and Computer Engineering, Statistics, or related fields. Expertise in machine learning and/or computer vision and strong skills in Python, MATLAB, and PyTorch/TensorFlow. Start Date and Duration: The preferred start date is January 2020. The position is for one year, with the possibility of extension up to three years. Salary will be commensurate with experience. Applications: 1. Please email a cover letter and a CV to e.elhamifar [at] northeastern [dot] edu as a single PDF file with the string "[MLCV Postdoc]" at the beginning of the subject line. 2. Arrange for reference letters to be sent directly to e.elhamifar [at] northeastern [dot] edu. For questions about the position and applications, please contact Dr. Elhamifar (e.elhamifar [at] northeastern [dot] edu).
UII America, Inc., a subsidiary of Shanghai United Imaging Intelligence Healthcare Co. Ltd. (UII), is building an organization of highly motivated, talented, and skillful AI experts and software developers to strengthen our R&D power and address the needs of our innovative products in the US market. United Imaging Intelligence (UII) is committed to providing AI solutions for medical devices, imaging, and diagnosis, and to helping clients better understand and embrace AI. United Imaging Intelligence is led by two world-renowned leaders in the AI industry. Together, they will lead UII in focusing on "empowerment" and "win-win": UII empowers doctors and equipment so that doctors and hospitals win, research institutions win, and third-party companies win. UII America, Inc. is building a world-class research and development team in Cambridge, MA. We have an immediate opening for a Computer Vision Research Intern who can work full-time/part-time, with the following qualification requirements: Currently pursuing an MS or PhD degree in Computer Science, Computer Engineering, Biomedical Engineering, Statistics, Applied Mathematics, or another related field; Self-motivated, with demonstrated problem-solving and critical-thinking skills; Familiarity with at least one mainstream deep learning toolkit, e.g., PyTorch or TensorFlow; Experience using Python and OpenCV; A proven track record of publications in top computer vision venues is a plus; Experience with 6D pose estimation or 3D visual data processing is a plus; Good communication skills and team spirit.
Link | Contact: Ziyan Wu | Posted on: 2019-09-28 15:25:28 UTC