
How do I split my data?

  •  -2
  • dhanunjayavarma kasiraju  ·  7 years ago

    Suppose my dataframe has 1010 rows. I now want to use train_test_split so that the first 1000 rows become the training data and the remaining 10 rows become the test data.

    # Natural Language Processing
    # Importing the libraries
    import numpy as np
    import matplotlib.pyplot as plt
    import pandas as pd
    
    # Importing the dataset
    dataset = pd.read_csv('Restaurant_Reviews.tsv', delimiter = '\t', quoting = 3)
    newset = pd.read_csv('Test.tsv', delimiter = '\t', quoting = 3)
    frames = [dataset, newset]
    res = pd.concat(frames, ignore_index = True)
    # Cleaning the texts
    import re
    import nltk
    nltk.download('stopwords')
    from nltk.corpus import stopwords
    from nltk.stem.porter import PorterStemmer
    corpus = []
    for i in range(0, 1010):
        review = re.sub('[^a-zA-Z]', ' ', res['Review'][i])
        review = review.lower()
        review = review.split()
        ps = PorterStemmer()
        review = [ps.stem(word) for word in review if not word in set(stopwords.words('english'))]
        review = ' '.join(review)
        corpus.append(review)
    from sklearn.feature_extraction.text import CountVectorizer
    cv = CountVectorizer(max_features = 1500)
    # X = cv.fit_transform(corpus).toarray()
    X = corpus
    y = res.iloc[:, 1].values
    
    # Splitting the dataset into the Training set and Test set
    from sklearn.model_selection import train_test_split
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.01, random_state = 0)
    
    1 Answer  |  7 years ago
  •  0
  •   Andrey Lukyanenko  ·  7 years ago

    If you know you want the first 1000 samples in the training set and the last 10 in the test set, it is better to slice them manually, because train_test_split shuffles the data and splits it at random.

    X_train = X[:1000]   # first 1000 rows for training
    X_test = X[1000:]    # remaining 10 rows for testing
    y_train = y[:1000]
    y_test = y[1000:]
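
    If you would rather keep using train_test_split, a minimal sketch is shown below; it assumes scikit-learn 0.19 or later, where train_test_split accepts a shuffle argument and an integer test_size, and that X and y are built as in the question.

    from sklearn.model_selection import train_test_split

    # shuffle=False keeps the original row order, so the last 10 rows
    # (the Test.tsv rows appended by pd.concat) end up in the test set
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 10, shuffle = False)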