Modalities: Text
Formats: JSON
Libraries: Datasets, Dask
Dataset schema:

query: string (length 9 to 3.4k)
document: string (length 9 to 87.4k)
metadata: dict
negatives: sequence (length 4 to 101)
negative_scores: sequence (length 4 to 101)
document_score: string (length 3 to 10)
document_rank: string (102 classes)
Sample row (from the dataset preview; the negatives and negative_scores lists are truncated here):

query:
Read stop words from input file (filename) and insert each word as a key into the stop words hash table.

document:
def load_stop_table(self, filename):
    self.stop_table = HashTable(191)
    try:
        a = open(filename, "r")
        lines = a.readlines()
        a.close()
    except:
        raise FileNotFoundError()
    for n in range(len(lines)):
        self.stop_table.insert(lines[n][:-1], n)

metadata (objective): { "self": [], "paired": [], "triplet": [ [ "query", "document", "negatives" ] ] }
negatives: similar implementations, e.g. "def load_stop_table(self, filename):\n    self.stop_table = HashTable(191)\n    with open(filename, 'r') as f: ..."
negative_scores: "0.7675772", "0.6984068", "0.67051345", ...
document_score: 0.69579995
document_rank: 2

The remaining preview rows pair docstrings with implementations of a concordance hash table (loading and writing entries), k-factor circulant matrix construction and matrix-vector multiplication, Toeplitz matrix-vector products, the Levinson algorithm for symmetric positive-definite Toeplitz systems, Toeplitz log-determinants, and preprocessing for Toeplitz inverse multiplication.

CoRNStack Python Dataset

The CoRNStack Dataset, accepted to ICLR 2025, is a large-scale, high-quality training dataset for code retrieval across multiple programming languages. It comprises <query, positive, negative> triplets used to train nomic-embed-code, CodeRankEmbed, and CodeRankLLM.

CoRNStack Dataset Curation

Starting from the deduplicated Stack v2, we created text-code pairs from function docstrings and their corresponding code. We filtered out low-quality pairs where the docstring was not in English, was too short, or contained URLs, HTML tags, or invalid characters. We additionally kept docstrings with text lengths of 256 tokens or longer to help the model learn long-range dependencies.
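As a rough illustration, the filtering heuristics above could be sketched as follows. The function name and thresholds are illustrative assumptions, not the exact CoRNStack values:

```python
import re

def keep_pair(docstring: str) -> bool:
    """Heuristic docstring filter in the spirit of the curation rules above.
    Thresholds are illustrative assumptions, not the exact CoRNStack values."""
    # Too short to usefully describe the paired code
    if len(docstring.split()) < 3:
        return False
    # Reject docstrings containing URLs or HTML tags
    if re.search(r"https?://|<[a-zA-Z][^>]*>", docstring):
        return False
    # Crude non-English / invalid-character proxy: require mostly ASCII text
    ascii_ratio = sum(c.isascii() for c in docstring) / len(docstring)
    return ascii_ratio >= 0.9
```

A production pipeline would use a language-identification model rather than an ASCII-ratio heuristic, but the shape of the filter is the same.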


After this initial filtering, we applied dual-consistency filtering to remove potentially noisy examples. We embedded every docstring and code example, computed the similarity between each docstring and every code example, and removed a pair if its code example was not among the top-2 most similar examples for its docstring.
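A minimal sketch of this top-2 consistency check, assuming L2-normalised embeddings so that dot products are cosine similarities (the embedding model and batching details are omitted):

```python
import numpy as np

def consistency_filter(doc_emb: np.ndarray, code_emb: np.ndarray, top_k: int = 2) -> list:
    """Keep pair i only if code example i is among the top_k most similar
    code embeddings for docstring i (row i of the similarity matrix)."""
    # Embeddings are assumed L2-normalised, so dot product = cosine similarity
    sim = doc_emb @ code_emb.T                     # shape (n_pairs, n_pairs)
    top = np.argsort(-sim, axis=1)[:, :top_k]      # top_k code indices per docstring
    return [i for i in range(len(doc_emb)) if i in top[i]]
```

In practice the full similarity matrix would be computed in blocks, since the corpus is far too large to hold an n-by-n matrix in memory.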

During training, we employ a novel curriculum-based hard negative mining strategy to ensure the model learns from challenging examples. We use a softmax-based sampling strategy to progressively sample hard negatives with increasing difficulty over time.

Join the Nomic Community

Citation

If you find the model, dataset, or training code useful, please cite our work:

@misc{suresh2025cornstackhighqualitycontrastivedata,
      title={CoRNStack: High-Quality Contrastive Data for Better Code Retrieval and Reranking}, 
      author={Tarun Suresh and Revanth Gangi Reddy and Yifei Xu and Zach Nussbaum and Andriy Mulyar and Brandon Duderstadt and Heng Ji},
      year={2025},
      eprint={2412.01007},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2412.01007}, 
}
