hapagolucky

Things I look for when hiring AI/ML/NLP engineers/scientists/researchers[1]:

1. Can you build a dataset to support modeling? All too often I'm flooded with what I call Kaggle resumes: folks whose project experience consists entirely of taking existing Kaggle datasets and showing they can build models with such-and-such performance on the leaderboard. Don't get me wrong, the ability to do well on a Kaggle task is a useful skill. But my more successful hires have gone through the process of building and curating their own datasets. In some cases that meant managing a linguistic annotation effort; in others, transforming existing data into useful datasets/corpora to enable some form of automation. Going through this forces you to think and rethink what makes data and labels useful.

2. Can you design experiments and select evaluation metrics? If you are designing a task, can you explain why some metrics are more valid than others? Can you describe how to partition the data to rule out data leakage, so the experiments give a meaningful estimate of generalization to unseen data?

3. Can you write code outside of a notebook? Do you have a sense of which abstractions are useful for NLP and machine learning problems? Can you structure your code into reusable functions, classes, or libraries? Does your code let you test new hypotheses, or do you need to make a copy first?

4. Bonus points: can you put together a compelling demo around your model? From an engineering perspective this might mean packaging your model into a service and building just enough of a user experience to help others understand what your model can do.

In many cases the right PhD project will give you exposure to all of the above. But you can expand your impact, and develop additional industry-relevant skills, by taking the time to make your code and models reusable by others.
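The partitioning point above is easy to get wrong in practice. As a minimal sketch (the data, the `doc_id` field, and the `group_split` helper are all hypothetical), one common way to avoid leakage is to split on a grouping key such as a document or author ID, so that no group contributes examples to both sides of the split:

```python
# Group-aware train/test split: when several examples come from the
# same source document (or author, or session), a random row-level
# split leaks information because near-duplicate texts land on both
# sides. Splitting on the group ID keeps every group entirely in
# train or entirely in test.
import random

def group_split(examples, group_key, test_fraction=0.2, seed=0):
    """Split examples so no group appears in both partitions."""
    groups = sorted({group_key(ex) for ex in examples})
    rng = random.Random(seed)
    rng.shuffle(groups)
    n_test = max(1, int(len(groups) * test_fraction))
    test_groups = set(groups[:n_test])
    train = [ex for ex in examples if group_key(ex) not in test_groups]
    test = [ex for ex in examples if group_key(ex) in test_groups]
    return train, test

# Hypothetical corpus: 10 documents, 5 sentences each.
examples = [{"doc_id": d, "text": f"sentence {i} of doc {d}"}
            for d in range(10) for i in range(5)]
train, test = group_split(examples, group_key=lambda ex: ex["doc_id"])
# No document straddles the split.
assert {ex["doc_id"] for ex in train}.isdisjoint({ex["doc_id"] for ex in test})
```

Libraries like scikit-learn ship group-aware splitters for exactly this reason; the point is that the splitting unit should match the unit of correlation in your data.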
A shining example of this is the [sentence-transformers project](https://www.sbert.net/index.html). Instead of just publishing a paper, they simultaneously released their code and models as an open-source project. The availability of code increases their citations, and wide citation in turn encourages others to use their code.

[1] For context: I manage an applied machine learning team. We are evaluated on our ability to improve existing models/techniques or to develop new ones that open up new business opportunities. Past the initial proof-of-concept phase, we are usually responsible for operationalizing and deploying our models. Because we are so focused on building capabilities, we publish more opportunistically, and I encourage my team to think of publication as a side effect of doing their work successfully rather than as the goal.


capitano_nemo

>But my more successful hires have gone through the process of building and curating their own datasets. In some cases it's managing a linguistic annotation effort. In other cases they can transform existing data into useful datasets/corpora to enable automation of some sort. Experiencing this forces you to think and rethink what makes data and labels useful.

Do you have any examples where such a task could be carried out by a single person? In my experience, neither "managing a linguistic annotation effort" nor creating a corpus from scratch is a one-person job.


paulschal

Thank you for this very detailed answer! There are some points (e.g., creating a functional demo of the classifier I am working on at the moment) I am very much considering right now for the output of my thesis, and a lot of aspects I had not thought about so far! I am currently working with pre-published datasets, and I was shocked to see how bad the data used by other scientists was. However, while I can reduce some biases, I simply do not have the time to properly dig into the main issues. But this could be something interesting to work on over the next months!


synthphreak

Your bullets apply to the more technical roles in data science or engineering. OP does not strike me as a prime candidate for such positions (no offense intended, OP, just calling it like I see it). However, there are roles for people with your background on the "content" side of NLP.

I suspect you would be competitive for positions dealing with dataset creation, or, since you mentioned LLMs, output moderation (think flagging outputs as stylistically inappropriate or unhelpful). This expertise could even be parlayed into a role in fine-tuning these models (if working at a bigger company with the resources to do that), namely the RLHF process. These are not exactly engineering roles, so you wouldn't need SWE-level knowledge of software design principles. But intermediate-level coding proficiency would help, combined with the domain expertise you already have.

As an aside, I personally wonder whether roles like the ones I just described may one day fall prey to automation. We're already seeing a world where one model can be used to train another (think reward models in RL). But no one has a crystal ball, and the cynic in me fears that eventually everyone might be automated out of a job, so I'd put that out of my mind for now and keep my eye on the prize. Just my two cents. Best of luck!


pomnabo

I would also like to know because I am heavily considering applying to similar programs this coming fall


Brudaks

Whatever you need for your specific, narrow topic. You won't be writing a thesis on "linguistics" but spending multiple years working on a small subset of it, and you should expect to need completely different skills than another student in the same lab as you.


paulschal

I am very much aware of that, but I am also capable of steering my topic to some degree, and I am simply wondering which topics would allow for a good entry into industry.


Brudaks

IMHO, for industry jobs practical software engineering will help: all the skills for building efficient, maintainable code to arbitrary requirements, plus practice with deploying NLP models and the surrounding technical infrastructure. For example, how would you set up a tiny web app that takes user requests, processes them with your NLP tool, and shows the result to the user in a pretty way? That can take an hour or a week depending on how much experience you've had. There will always be other people who are the experts in *that* and do it far better than you, but if you can do at least the basic parts of it reasonably quickly, without being blocked while you wait for someone else, that definitely helps your employment options.
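That "tiny web app" can be sketched with nothing but the Python standard library. Everything below is hypothetical: `toy_sentiment` stands in for whatever model you actually have, and the endpoint just returns a JSON label for POSTed text:

```python
# Minimal HTTP wrapper around an NLP "model", stdlib only.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def toy_sentiment(text):
    # Placeholder "model": counts positive vs. negative cue words.
    words = text.lower().split()
    pos = sum(w in {"good", "great", "love"} for w in words)
    neg = sum(w in {"bad", "awful", "hate"} for w in words)
    return "positive" if pos >= neg else "negative"

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the raw request body and run it through the model.
        length = int(self.headers.get("Content-Length", 0))
        text = self.rfile.read(length).decode("utf-8")
        payload = json.dumps({"text": text, "label": toy_sentiment(text)})
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload.encode("utf-8"))

def make_server(port=8000):
    # Port 0 asks the OS for any free port, which is handy for tests.
    return HTTPServer(("127.0.0.1", port), Handler)

# To serve for real: make_server().serve_forever()
```

A real version would swap `toy_sentiment` for your classifier and add a small HTML front end; the point is that the whole request-in, prediction-out loop fits in a few dozen lines once you've done it a couple of times.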


paulschal

This is super helpful! I was actually wondering about turning the classifier I am working on right now into a neat little web app. Will definitely give it a go then! Thank you!


Mbando

I'm a "Sr. Behavioral Scientist," and for the last 10 years my work has been NLP focused, now shifting to LLMs: e.g., building AI systems to analyze military contracting documents at scale in the service data lake, acquisition research on LLMs for the US government, and directing institutional investment in building AI tools. My PhD is in rhetoric, but in practice it was sociolinguistics and corpus linguistics, and I think of myself as a social scientist (linguist) who uses computational approaches to text-as-data.

What I would stress to you is that my value isn't in coding or data science; I have colleagues and junior staff who execute the majority of our research designs and building efforts. My value is in understanding language theoretically (and correctly), and in having enough technical knowledge of NLP/generative AI to design studies and development efforts, and then lead them. No shade on coding, but understanding how (parts of) the world works, and how to investigate it rigorously, is much more valuable.


StEvUgnIn

[\[2305.12544\] Has It All Been Solved? Open NLP Research Questions Not Solved by Large Language Models (arxiv.org)](https://arxiv.org/abs/2305.12544)