
[python]  TypeError: expected string or bytes-like object

 

 

TypeError: expected string or bytes-like object

 

I ran re.findall with a regular expression and got a TypeError: expected string or bytes-like object.

The values were being pulled from the list just fine and the regular expression was correct, so why did the error keep appearing? It turned out that the object I was searching was not a string, and re.findall only accepts a string (or bytes-like object), so the fix was to convert it with str() first.

import re

# key may not be a string, so cast it with str() before matching
param1 = re.findall("[0-9]{5}", str(key))
param2 = re.findall("[A-Z]{1}[0-9]{3}", str(key))
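For reference, a minimal self-contained sketch of the error and the fix; the key value and the patterns here are made up just for illustration.

import re

key = ["A123", 12345]   # not a string, e.g. a value pulled out of a list

# re.findall("[0-9]{5}", key)   # raises TypeError: expected string or bytes-like object
print(re.findall("[0-9]{5}", str(key)))          # ['12345']
print(re.findall("[A-Z]{1}[0-9]{3}", str(key)))  # ['A123']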

matplotlib colormap

Good explanations of colormaps are hard to find.
I wanted to write one up in detail, but while there are plenty of examples of how to use colormaps, there is little that actually explains them.
While digging through the matplotlib source code I found myself needing to understand colormaps, so here is my summary, incomplete as it is.

 

The way I think of it, a colormap is a "map" that assigns colors to numbers.
It is not two-dimensional like a table, though; it is a one-dimensional strip of colors, like a 30 cm ruler.

When you put your data on a graph, drawing it in some ordinary, fixed color adds no meaning;
a color by itself says nothing about the data.
With a well-designed colormap, however, the colors themselves carry the data, and the visual payoff is enormous.

For example, suppose we want to paint the Earth according to average temperature:
cold regions can be drawn in blue and hot regions in red.
Every spot on Earth then ends up with some color on the gradient running from blue to red.
A color table that can be applied to data in this way is called a colormap.
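A minimal matplotlib sketch of this idea (the temperature values are invented for illustration): the built-in 'coolwarm' colormap maps low numbers to blue and high numbers to red, and the colorbar shows the number-to-color scale.

import numpy as np
import matplotlib.pyplot as plt

# made-up average temperatures on a longitude/latitude grid
lon, lat = np.meshgrid(np.linspace(-180, 180, 100), np.linspace(-90, 90, 50))
temperature = 30 * np.cos(np.radians(lat)) - 5   # warmer near the equator

# the colormap turns each scalar value into a color: cold -> blue, hot -> red
plt.pcolormesh(lon, lat, temperature, cmap='coolwarm')
plt.colorbar(label='average temperature')
plt.show()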

 

# The matplotlib documentation.

 

# A collection of colormaps well suited to displaying scientific data.
The graphic that colors a map of the United States by population density is stunning.

 

# A colormap explanation in Korean.
Not a long write-up, but it explains some of the mappings in detail.



Source: https://pythonkim.tistory.com/82 [파이쿵]


Gensim word vector visualization of various word vectors

web.stanford.edu/class/cs224n/materials/Gensim%20word%20vector%20visualization.html

 


import numpy as np

# Get the interactive Tools for Matplotlib
%matplotlib notebook
import matplotlib.pyplot as plt
plt.style.use('ggplot')

from sklearn.decomposition import PCA

from gensim.test.utils import datapath, get_tmpfile
from gensim.models import KeyedVectors
from gensim.scripts.glove2word2vec import glove2word2vec
 
# convert the GloVe text file to word2vec format, then load it as KeyedVectors
glove_file = datapath('/Users/manning/Corpora/GloVe/glove.6B.100d.txt')
word2vec_glove_file = get_tmpfile("glove.6B.100d.word2vec.txt")
glove2word2vec(glove_file, word2vec_glove_file)
model = KeyedVectors.load_word2vec_format(word2vec_glove_file)

model.most_similar('obama')
model.most_similar('banana')
model.most_similar(negative='banana')

# the classic king - man + woman analogy; prints the top match and its similarity
result = model.most_similar(positive=['woman', 'king'], negative=['man'])
print("{}: {:.4f}".format(*result[0]))

def analogy(x1, x2, y1):
    # "x1 is to x2 as y1 is to ?" -- returns the top-ranked answer word
    result = model.most_similar(positive=[y1, x2], negative=[x1])
    return result[0][0]
Analogy

analogy('japan', 'japanese', 'australia')
analogy('australia', 'beer', 'france')
analogy('obama', 'clinton', 'reagan')
analogy('tall', 'tallest', 'long')
analogy('good', 'fantastic', 'bad')
print(model.doesnt_match("breakfast cereal dinner lunch".split()))
 
def display_pca_scatterplot(model, words=None, sample=0):
    # if no word list is given, either sample random words or use the whole vocabulary
    if words is None:
        if sample > 0:
            words = np.random.choice(list(model.vocab.keys()), sample)
        else:
            words = [word for word in model.vocab]

    word_vectors = np.array([model[w] for w in words])

    # project the word vectors onto their first two principal components
    twodim = PCA().fit_transform(word_vectors)[:, :2]

    plt.figure(figsize=(6, 6))
    plt.scatter(twodim[:, 0], twodim[:, 1], edgecolors='k', c='r')
    for word, (x, y) in zip(words, twodim):
        plt.text(x + 0.05, y + 0.05, word)
 
display_pca_scatterplot(model, 
                        ['coffee', 'tea', 'beer', 'wine', 'brandy', 'rum', 'champagne', 'water',
                         'spaghetti', 'borscht', 'hamburger', 'pizza', 'falafel', 'sushi', 'meatballs',
                         'dog', 'horse', 'cat', 'monkey', 'parrot', 'koala', 'lizard',
                         'frog', 'toad', 'monkey', 'ape', 'kangaroo', 'wombat', 'wolf',
                         'france', 'germany', 'hungary', 'luxembourg', 'australia', 'fiji', 'china',
                         'homework', 'assignment', 'problem', 'exam', 'test', 'class',
                         'school', 'college', 'university', 'institute'])
display_pca_scatterplot(model, sample=300)

POS Tagging with the Stanford POS Tagger

from nltk.tag import StanfordPOSTagger
from nltk.tokenize import word_tokenize

STANFORD_POS_MODEL_PATH = "<unzipped directory>/stanford-postagger-full-2018-02-27/models/english-bidirectional-distsim.tagger"
STANFORD_POS_JAR_PATH = "<unzipped directory>/stanford-postagger-full-2018-02-27/stanford-postagger-3.9.1.jar"

pos_tagger = StanfordPOSTagger(STANFORD_POS_MODEL_PATH, STANFORD_POS_JAR_PATH)

text = """Facebook CEO Mark Zuckerberg acknowledged a range of mistakes on Wednesday, 
including allowing most of its two billion users to have their public profile data scraped by outsiders. 
However, even as he took responsibility, he maintained he was the best person to fix the problems he created."""

tokens = word_tokenize(text)
print(tokens)
print()
print(pos_tagger.tag(tokens))

['Facebook', 'CEO', 'Mark', 'Zuckerberg', 'acknowledged', 'a', 'range', 'of', 'mistakes', 'on', 'Wednesday', ',', 'including', 'allowing', 'most', 'of', 'its', 'two', 'billion', 'users', 'to', 'have', 'their', 'public', 'profile', 'data', 'scraped', 'by', 'outsiders', '.', 'However', ',', 'even', 'as', 'he', 'took', 'responsibility', ',', 'he', 'maintained', 'he', 'was', 'the', 'best', 'person', 'to', 'fix', 'the', 'problems', 'he', 'created', '.']

[('Facebook', 'NNP'), ('CEO', 'NNP'), ('Mark', 'NNP'), ('Zuckerberg', 'NNP'), ('acknowledged', 'VBD'), ('a', 'DT'), ('range', 'NN'), ('of', 'IN'), ('mistakes', 'NNS'), ('on', 'IN'), ('Wednesday', 'NNP'), (',', ','), ('including', 'VBG'), ('allowing', 'VBG'), ('most', 'JJS'), ('of', 'IN'), ('its', 'PRP$'), ('two', 'CD'), ('billion', 'CD'), ('users', 'NNS'), ('to', 'TO'), ('have', 'VB'), ('their', 'PRP$'), ('public', 'JJ'), ('profile', 'NN'), ('data', 'NNS'), ('scraped', 'VBN'), ('by', 'IN'), ('outsiders', 'NNS'), ('.', '.'), ('However', 'RB'), (',', ','), ('even', 'RB'), ('as', 'IN'), ('he', 'PRP'), ('took', 'VBD'), ('responsibility', 'NN'), (',', ','), ('he', 'PRP'), ('maintained', 'VBD'), ('he', 'PRP'), ('was', 'VBD'), ('the', 'DT'), ('best', 'JJS'), ('person', 'NN'), ('to', 'TO'), ('fix', 'VB'), ('the', 'DT'), ('problems', 'NNS'), ('he', 'PRP'), ('created', 'VBD'), ('.', '.')]

noun_and_verbs = []
for token in pos_tagger.tag(tokens):
    # keep only tokens whose Penn Treebank tag starts with V (verbs) or N (nouns)
    if token[1].startswith("V") or token[1].startswith("N"):
        noun_and_verbs.append(token[0])
print(', '.join(noun_and_verbs))

Facebook, CEO, Mark, Zuckerberg, acknowledged, range, mistakes, Wednesday, including, allowing, users, have, profile, data, scraped, outsiders, took, responsibility, maintained, was, person, fix, problems, created
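If the Stanford jar and model files are not at hand, NLTK's built-in averaged perceptron tagger can be used the same way; it is a different model, so a few tags may differ, but it also produces Penn Treebank tags. A minimal sketch:

import nltk
from nltk.tokenize import word_tokenize

nltk.download('punkt')
nltk.download('averaged_perceptron_tagger')

sentence = "Facebook CEO Mark Zuckerberg acknowledged a range of mistakes on Wednesday."
tagged = nltk.pos_tag(word_tokenize(sentence))   # list of (token, Penn Treebank tag)
print([word for word, tag in tagged if tag.startswith(("N", "V"))])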

novdov.github.io/nlp/2018/04/05/NLP-POS-Tagging-%ED%92%88%EC%82%AC-%ED%83%9C%EA%B9%85/

 


POS tag abbreviations (Penn Treebank)

www.ling.upenn.edu/courses/Fall_2003/ling001/penn_treebank_pos.html

 

Penn Treebank P.O.S. Tags


No. / Tag / Description

1. CC Coordinating conjunction
2. CD Cardinal number
3. DT Determiner
4. EX Existential there
5. FW Foreign word
6. IN Preposition or subordinating conjunction
7. JJ Adjective
8. JJR Adjective, comparative
9. JJS Adjective, superlative
10. LS List item marker
11. MD Modal
12. NN Noun, singular or mass
13. NNS Noun, plural
14. NNP Proper noun, singular
15. NNPS Proper noun, plural
16. PDT Predeterminer
17. POS Possessive ending
18. PRP Personal pronoun
19. PRP$ Possessive pronoun
20. RB Adverb
21. RBR Adverb, comparative
22. RBS Adverb, superlative
23. RP Particle
24. SYM Symbol
25. TO to
26. UH Interjection
27. VB Verb, base form
28. VBD Verb, past tense
29. VBG Verb, gerund or present participle
30. VBN Verb, past participle
31. VBP Verb, non-3rd person singular present
32. VBZ Verb, 3rd person singular present
33. WDT Wh-determiner
34. WP Wh-pronoun
35. WP$ Possessive wh-pronoun
36. WRB Wh-adverb


Remove Elements From A Counter

 

Removing an element from a Counter

Question:

How do you remove an element from a counter?

Answer:

Setting a count to zero does not remove an element from a counter. Use del to remove it entirely.

Source: (example.py)

from collections import Counter

c = Counter(x=10, y=7, z=3)
print(c)

c['z'] = 0        # setting a count to zero does NOT remove the key
print(c)

del c['z']        # del removes the entry entirely
print(c)

Output:

$ python example.py
Counter({'x': 10, 'y': 7, 'z': 3})
Counter({'x': 10, 'y': 7, 'z': 0})
Counter({'x': 10, 'y': 7})

How to add or increment single item of the Python Counter class

 


Adding a new item to a Counter and incrementing it in Python:

>>> import collections
>>> c = collections.Counter(a=23, b=-9)

#You can add a new element and set its value like this:

>>> c['d'] = 8
>>> c
Counter({'a': 23, 'd': 8, 'b': -9})


#Increment:

>>> c['d'] += 1
>>> c
Counter({'a': 23, 'd': 9, 'b': -9})


#Note though that c['b'] = 0 does not delete:

>>> c['b'] = 0
>>> c
Counter({'a': 23, 'd': 9, 'b': 0})


#To delete use del:

>>> del c['b']
>>> c
Counter({'a': 23, 'd': 9})
Counter is a dict subclass
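Because Counter is a dict subclass, the usual dict operations work on it; the sketch below (values chosen just for illustration) also shows the update()-based way to increment a single key, which is what the post title asks about.

from collections import Counter

c = Counter(a=23, b=-9)

# update() counts the elements of an iterable, so a one-element list increments one key
c.update(['a'])
print(c['a'])                # 24

# ordinary dict operations are available...
print(isinstance(c, dict))   # True
print(sorted(c.keys()))      # ['a', 'b']

# ...but unlike a plain dict, a missing key returns 0 instead of raising KeyError
print(c['missing'])          # 0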