Create word lists from medical journals

I have been asked to compile a crossword for a surgeons' publication that comes out quarterly. I need to make it medically oriented, preferably using words from different specialties, e.g. some orthopaedics, some cardiac surgery, some human anatomy and so on. I can get surgical journals online.

I want to create word lists for each specialty and use them in Crossword Compiler.

I can use journal articles on the web, or downloaded PDFs. I am a surgeon and use pandas for data analysis, but my Python skills are fairly basic, so I need relatively simple solutions. How can I create the specific word lists for each surgical specialty?

They don't need to be very specific words, so, for example, I thought I could scrape a journal volume for words, compare them to a list of common English words and delete those, leaving me with a technical list (rough sketch below). That may require some trial and error. I haven't used Beautiful Soup before but I am willing to try it.

Alternatively, I could skip the Beautiful Soup step altogether and use EndNote to download a few hundred journal articles and export them to txt.

It's the extraction and list-making that I am mainly struggling to conceptualise.
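
The nearest I can get to a mental model is a rough, untested sketch along these lines, where 'common_words.txt' stands for whatever frequency list of everyday English I manage to download and 'journal_issue.txt' is the text extracted from one issue:

with open('common_words.txt') as f:
    common = set(line.strip().lower() for line in f)

with open('journal_issue.txt') as f:
    journal_words = set(f.read().lower().split())

# whatever is left after removing everyday English should be mostly specialty terms
technical_words = sorted(journal_words - common)

Is that roughly the right way to think about it?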

CodePudding user response:

Here is a small program you can use to parse a .txt file and find the most common words. I have also included a block of code that will help you convert a .pdf file to .txt. Hope this approach helps, and good luck with the crossword for the surgeons' publication!

'''
Find the most common words in a txt file
'''

import collections
# The re module provides regular expression matching operations
import re
'''
Use this if you would like to convert a PDF to a txt file
'''
# import PyPDF2
# pdffileobj = open('textFileName.pdf', 'rb')
# pdfreader = PyPDF2.PdfFileReader(pdffileobj)
# # extract the text of every page, not just the last one
# text = ''
# for i in range(pdfreader.numPages):
#     text += pdfreader.getPage(i).extractText()
# pdffileobj.close()

# file1 = open(r"(folder path)\textFileName.txt", "a")
# file1.write(text)
# file1.close()

# split the text into lowercase words
with open('textFileName.txt') as f:
    words = re.findall(r'\w+', f.read().lower())

# count the words and show the ten most frequent
most_common = collections.Counter(words).most_common(10)
print(most_common)
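
If you then want to turn those counts into a specialty word list, one simple extension is to drop anything that appears in the list of common English words you mentioned and write the survivors to a plain text file, one word per line, which you should then be able to import into Crossword Compiler as a custom word list. This is only a sketch: 'common_words.txt' stands for whatever common-word list you download, 'orthopaedics_wordlist.txt' is just an example output name, and the length and frequency cut-offs are guesses you will want to tune by trial and error.

'''
Sketch: filter out common English words and save a specialty word list
'''
with open('common_words.txt') as f:
    common = set(line.strip().lower() for line in f)

counts = collections.Counter(words)
with open('orthopaedics_wordlist.txt', 'w') as out:
    for word, n in counts.most_common():
        # keep words that are not everyday English, are long enough for a grid,
        # and appear more than once (to weed out typos and OCR noise)
        if word not in common and len(word) >= 4 and n > 1:
            out.write(word + '\n')

For articles on the web rather than PDFs, requests together with Beautiful Soup's get_text() will give you the raw text to feed into the same pipeline, and you can keep one output file per specialty.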