
TopicVid Dataset

This dataset provides structured metadata, content features, and a heterogeneous graph related to short-video topics and subtopics. It is designed for tasks such as topic analysis, audience interaction modeling, peak prediction, and research on graph neural networks or graph retrieval.


Contents

  • available_dataset_with_subtopic.json — Structured records of short-video content and topic-level interaction statistics.
  • comment.npy — Comment features.
  • content.npy — Content features.
  • desc.npy — Description features.
  • heterogeneous_graph.pkl — Heterogeneous graph file.
  • title.npy — Title features.
  • topic.npy — Topic embeddings.
  • video.npy — Video features.

Data Structure

1) available_dataset_with_subtopic.json

This file contains structured records of short-video content and their interaction statistics.

Fields:

  • url (string) — Direct link to the video on the platform.
  • desc (string) — Description text of the video content.
  • title (string) — Title of the video post.
  • content (string) — Additional text content; may be empty.
  • user_id (string) — Unique identifier of the publishing user.
  • duration (integer) — Video duration in seconds.
  • platform (string) — Source platform name (e.g., Douyin, Kuaishou).
  • post_create_time (string) — Time of publication in "YYYY-MM-DD HH:MM:SS" format.
  • topic (string) — Main topic associated with the video.
  • subtopic (string) — Numbered subcategory under the main topic.
  • time_frames (dict) — Interaction statistics recorded at different dates.
    • Key: Date in "YYYY-MM-DD" format
    • Value: Dictionary with fields:
      • fans_count — Number of followers
      • like_count — Number of likes
      • view_count — Number of views
      • share_count — Number of shares
      • collect_count — Number of collections
      • comment_count — Number of comments
  • comments (dict) — Collection of user comments.
    • Key: Comment index (string)
    • Value: Dictionary with fields:
      • comment_user_id — Commenting user ID
      • comment_nickname — Commenting user's display name
      • comment_content — Comment text
      • comment_time — Time of comment
      • ip_address — IP location of the commenting user
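A record in this file is a plain nested JSON object. The sketch below builds a minimal synthetic record following the schema above (all field values are illustrative, not taken from the dataset) and shows how to read a nested interaction statistic:

```python
import json

# Minimal synthetic record matching the documented schema (values are illustrative).
record = {
    "url": "https://example.com/video/123",
    "desc": "Sample description",
    "title": "Sample title",
    "content": "",
    "user_id": "u_001",
    "duration": 42,
    "platform": "Douyin",
    "post_create_time": "2024-01-01 12:00:00",
    "topic": "sports",
    "subtopic": "1",
    "time_frames": {
        "2024-01-02": {
            "fans_count": 100, "like_count": 10, "view_count": 500,
            "share_count": 2, "collect_count": 3, "comment_count": 1,
        }
    },
    "comments": {
        "0": {
            "comment_user_id": "u_002",
            "comment_nickname": "viewer",
            "comment_content": "Nice video!",
            "comment_time": "2024-01-02 08:00:00",
            "ip_address": "Beijing",
        }
    },
}

# Round-trip through JSON and read an interaction statistic for one date.
loaded = json.loads(json.dumps(record))
views_on_jan_2 = loaded["time_frames"]["2024-01-02"]["view_count"]
print(views_on_jan_2)  # 500
```

Because `time_frames` is keyed by date strings, iterating `loaded["time_frames"].items()` yields the interaction time series for a video.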

2) *.npy

Numpy arrays containing preprocessed embeddings or feature vectors.
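These files load with `numpy.load`. The sketch below round-trips a small matrix the same way; the file name and shape here are stand-ins, not the actual contents of e.g. title.npy:

```python
import numpy as np

# Stand-in feature matrix: 4 items with 8-dimensional embeddings.
# The real *.npy files hold one row per video/topic; shapes here are assumptions.
features = np.random.rand(4, 8).astype(np.float32)
np.save("title_demo.npy", features)  # stand-in for e.g. title.npy

loaded = np.load("title_demo.npy")
print(loaded.shape)  # (4, 8)
```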

3) heterogeneous_graph.pkl

A serialized Python object containing:

  • Node types and indices
  • Edge types and lists
  • Label information is available at link
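The graph file can be read back with the standard `pickle` module. The dict layout below is only an assumption based on the bullet list above (node types with indices, edge types with edge lists); the actual object in heterogeneous_graph.pkl may be organized differently:

```python
import pickle

# Illustrative structure only: the real heterogeneous_graph.pkl layout may differ.
graph = {
    "node_types": {"video": [0, 1, 2], "topic": [0, 1], "user": [0]},
    "edge_types": {("video", "belongs_to", "topic"): [(0, 0), (1, 0), (2, 1)]},
}
with open("heterogeneous_graph_demo.pkl", "wb") as f:
    pickle.dump(graph, f)

with open("heterogeneous_graph_demo.pkl", "rb") as f:
    loaded = pickle.load(f)
print(sorted(loaded["node_types"]))  # ['topic', 'user', 'video']
```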