ytbot

The document outlines the development of an AI bot for YouTube that can listen to videos, track playtime, generate Excel reports, and leave comments. It discusses the use of the YouTube API for video details and comments, the need for speech recognition for audio processing, and the implementation of multiple agents for handling different playlists concurrently using threading. Key considerations include legal compliance with YouTube's terms of service, API quota management, and error handling in a multi-threaded environment.

Uploaded by

Jus Live

Okay, the user wants an AI bot that can listen to YouTube videos, track playtime, generate an Excel report, and leave comments. Let me break down what needs to be done here.

First, I need to figure out how to access YouTube videos programmatically. The YouTube
API comes to mind. They have a Data API for accessing video details and maybe even
interacting with comments. But does the user want the bot to watch the video as a user
would, or just process the video's data? The mention of "listen" suggests maybe
processing the audio. So maybe speech recognition is needed here.

Next, tracking playtime. If the bot is simulating a user playing the video, it might need
to interact with the YouTube player. But that could be tricky due to potential anti-bot
measures. Alternatively, maybe the user just wants to know the duration of the videos
played. Wait, the user said "played time played leave comment". Maybe they want to
track how much of the video was actually watched before leaving a comment. That
would require monitoring playback time up until the comment is left.

Generating an Excel report. Python has libraries like pandas and openpyxl for creating
Excel files. That part seems manageable once the data is collected.
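As a quick illustration of that part, collected rows can go straight into an Excel file via pandas (the row contents and column names here are placeholders; writing `.xlsx` additionally requires the openpyxl engine to be installed):

```python
import pandas as pd

# Hypothetical rows the bot might have collected, one per video
rows = [
    {'video_id': 'abc123', 'title': 'Demo video', 'play_duration_s': 61},
    {'video_id': 'def456', 'title': 'Another video', 'play_duration_s': 58},
]

df = pd.DataFrame(rows)
try:
    df.to_excel('report.xlsx', index=False)  # needs the openpyxl engine
except ImportError:
    pass  # openpyxl not installed in this environment

print(df.shape)  # → (2, 3)
```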

Leaving comments on the video. The YouTube API allows for adding comments, but
authentication is required. The user would need to set up OAuth credentials for their
Google account to allow the bot to post comments.

Potential challenges: Handling authentication securely, especially if the user isn't tech-
savvy. Also, processing audio might require downloading the video, which could have
legal implications depending on the use case. Need to make sure the user is aware of
YouTube's terms of service.

Another thing: Listening to the video's content. If the bot needs to analyze the audio,
tools like SpeechRecognition combined with FFmpeg to extract audio might be
necessary. But that adds complexity.

Possible steps:
1. Use YouTube Data API to search/get video details.

2. Use youtube-dl or pytube to download video audio.

3. Process the audio with a speech-to-text API or library.

4. Track the duration the video was played (maybe up to when the comment is made).

5. Use the YouTube API again to post a comment.

6. Compile all the data into an Excel file using pandas.

Wait, but how to track when the user leaves a comment. Maybe the bot is supposed to
play the video, and when the user stops it, record the time and then comment. But how
to detect that in an automated way. Maybe simulating a browser with Selenium, but
that's more involved.

Alternatively, if the user wants to process existing videos and generate reports based
on some criteria, that's different. Need to clarify, but since the user mentioned "after
play leave report", maybe it's per session where the bot plays the video, tracks how
long it's played, then comments.

But YouTube might not allow automated video playback without explicit user interaction
due to autoplay policies. That's a hurdle. Maybe using headless browsers could work,
but they might be detected.

Alternatively, maybe the bot isn't actually playing the video but just processing the
video's metadata and comments. The user's wording is a bit ambiguous. Need to make
assumptions here.

Putting it all together, the solution might involve:

- Authenticating with Google APIs.

- Retrieving video details (title, duration, etc.).

- Downloading the audio for processing.


- Using speech recognition to transcribe the audio (if needed for the report).

- Simulating playback (maybe just tracking time programmatically).

- Posting a comment via the API.

- Writing the data to Excel.

Potential code structure:

1. Set up Google API credentials.

2. Use youtube-dl to get video info and download audio.

3. Transcribe audio with Google Speech-to-Text or another service.

4. Track playtime (maybe just the video duration or a specified time).

5. Post comment using the API.

6. Use pandas to write video ID, title, playtime, comment to Excel.

Wait, but the user wants the playtime that was actually played before leaving. If it's
automated, how is the playtime determined? Maybe the user specifies a time, or the bot
plays a random segment. The original query isn't clear. Maybe the bot is meant to
monitor a user's interaction, but that's more of a browser extension. Alternatively,
perhaps it's for the user's own videos to track when others stop watching. But that's
analytics provided by YouTube itself.

Hmm, maybe the user is looking for a bot that can watch videos on their behalf, track
how much they watched, leave a comment, and then report that. So like a personal tool
to automate watching videos and noting when they stopped.

In that case, the bot would need to start playing the video, track time, and upon
stopping, record the time and comment. But automating this would require interacting
with the YouTube player, which is not straightforward via API. Selenium could be used to
control a browser, but that's fragile and against YouTube's terms of service.

Alternative approach: Use the YouTube IFrame Player API to control playback in a
browser environment, but again, automating this might be against their policies.

Perhaps the user's requirement can be met without actual playback simulation. Maybe
they just want to log the time a video was played (like when it was accessed) and add a
comment, but that's more of a manual process.

Given the ambiguity, the assistant should outline a solution that uses the YouTube API
for accessing video data and posting comments, along with a way to track time (maybe
based on video duration or user input), and generate the Excel report. Also, include
disclaimers about YouTube's terms of service, especially regarding automated access
and commenting.

So the step-by-step plan would involve:

1. Setting up Google API credentials.

2. Using the YouTube Data API to get video details.

3. Allowing the user to input playtime or calculate it based on video length.

4. Posting a comment via the API.

5. Writing the data to Excel.

If audio processing is needed, include steps for downloading audio and using speech
recognition, but note the legal considerations.

Final code would need to handle authentication, video data retrieval, comment posting,
and Excel generation. Provide instructions for installing necessary libraries and setting
up API keys.

Here's a step-by-step solution to create a Python-based AI bot that tracks YouTube video playtime, generates Excel reports, and posts comments:

1. Prerequisites

 Python 3.7+
 Google API credentials
 Required libraries:
pip install google-api-python-client pandas openpyxl pytube SpeechRecognition

2. Setup Google API Credentials

1. Create a project at Google Cloud Console

2. Enable YouTube Data API v3

3. Create OAuth 2.0 credentials (download credentials.json)

3. Python Implementation

import datetime
import subprocess
import pandas as pd
from googleapiclient.discovery import build
from pytube import YouTube
import speech_recognition as sr

class YouTubeBot:
    def __init__(self, api_key, client_secret_file='credentials.json'):
        self.api_key = api_key
        self.client_secret_file = client_secret_file  # used for OAuth (commenting)
        self.youtube = build('youtube', 'v3', developerKey=api_key)
        self.report_data = []

    def get_video_details(self, video_id):
        request = self.youtube.videos().list(
            part="snippet,contentDetails",
            id=video_id
        )
        return request.execute()

    def process_video(self, video_url):
        # Extract video ID (handle both watch?v= and youtu.be URL forms)
        if 'v=' in video_url:
            video_id = video_url.split('v=')[1].split('&')[0]
        else:
            video_id = video_url.rstrip('/').split('/')[-1]

        # Get video details
        details = self.get_video_details(video_id)
        title = details['items'][0]['snippet']['title']
        duration = details['items'][0]['contentDetails']['duration']

        # Track playtime (simulated)
        play_start = datetime.datetime.now()
        # Add your actual playback implementation here
        play_end = datetime.datetime.now()
        play_duration = str(play_end - play_start)

        # Transcribe audio
        transcript = self.transcribe_audio(video_url)

        # Store data
        self.report_data.append({
            'video_id': video_id,
            'title': title,
            'duration': duration,
            'play_duration': play_duration,
            'comment': '',
            'transcript': transcript
        })
        return True

    def transcribe_audio(self, video_url):
        try:
            yt = YouTube(video_url)
            audio = yt.streams.filter(only_audio=True).first()
            # NOTE: pytube audio streams are MP4/WebM containers, not WAV,
            # so convert with ffmpeg before SpeechRecognition can read the file
            audio.download(filename='temp_audio.mp4')
            subprocess.run(['ffmpeg', '-y', '-i', 'temp_audio.mp4',
                            'temp_audio.wav'], check=True)
            r = sr.Recognizer()
            with sr.AudioFile('temp_audio.wav') as source:
                audio_data = r.record(source)
            text = r.recognize_google(audio_data)
            return text
        except Exception as e:
            return str(e)

    def generate_report(self, filename='report.xlsx'):
        df = pd.DataFrame(self.report_data)
        df.to_excel(filename, index=False)
        print(f"Report generated: {filename}")

    def post_comment(self, video_id, comment_text):
        # Posting comments requires OAuth credentials (an API key is not enough);
        # self.get_credentials() is a placeholder for your OAuth flow
        youtube = build('youtube', 'v3', credentials=self.get_credentials())
        request = youtube.commentThreads().insert(
            part="snippet",
            body={
                "snippet": {
                    "videoId": video_id,
                    "topLevelComment": {
                        "snippet": {
                            "textOriginal": comment_text
                        }
                    }
                }
            }
        )
        response = request.execute()
        return response

# Usage
API_KEY = 'YOUR_API_KEY'
bot = YouTubeBot(API_KEY)

# Process video
bot.process_video('https://youtu.be/EXAMPLE_VIDEO_ID')

# Post comment (requires OAuth)
# bot.post_comment('VIDEO_ID', 'Great content!')

# Generate report
bot.generate_report()
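One wrinkle in the report data above: `contentDetails.duration` comes back as an ISO 8601 string such as `PT1H2M10S`, not seconds. A small converter makes it usable for playtime arithmetic (a sketch handling the common `PT…H…M…S` form; the `isodate` package is an off-the-shelf alternative):

```python
import re

def iso8601_to_seconds(duration):
    """Convert a YouTube ISO 8601 duration like 'PT1H2M10S' to seconds."""
    match = re.fullmatch(r'PT(?:(\d+)H)?(?:(\d+)M)?(?:(\d+)S)?', duration)
    if not match:
        raise ValueError(f"Unrecognized duration: {duration!r}")
    hours, minutes, seconds = (int(g) if g else 0 for g in match.groups())
    return hours * 3600 + minutes * 60 + seconds

print(iso8601_to_seconds('PT4M13S'))    # → 253
print(iso8601_to_seconds('PT1H2M10S'))  # → 3730
```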

4. Features
1. Video Details Tracking

o Title, duration, and actual playtime

o Audio transcription using speech recognition

2. Excel Reporting

o Video metadata

o Play duration

o Transcript excerpts

3. Comment Posting

o OAuth-based commenting functionality

5. Limitations & Considerations

1. Legal Compliance

o Follow YouTube's Terms of Service

o Respect copyright laws

o Comply with Google API quotas

2. Playback Simulation

o Actual playback control requires browser automation (consider Selenium)
3. Comment Automation

o YouTube may flag frequent comments as spam

o Requires user authentication via OAuth

6. Enhancement Suggestions

1. Add browser automation for real playback tracking

2. Implement sentiment analysis for comments

3. Add video analytics visualization

4. Include user authentication system

5. Add error handling for private videos/age-restricted content


This implementation provides a foundation for YouTube video analysis and
interaction. Adjust based on your specific needs and ensure compliance with
YouTube's policies.

Okay, the user wants to know if they can have multiple agents running
simultaneously, each handling different playlists. Let me think about how to
approach this.

First, the previous solution was a single bot instance. To make multiple
agents, I need to consider concurrency in Python. Threading or
multiprocessing are options. Since the task might involve I/O operations like
API calls and downloading audio, threading could be efficient here because of
I/O-bound tasks.

But the user might not be familiar with the differences between threading
and multiprocessing. I should mention both but suggest threading for
simplicity, unless they need to bypass the GIL for CPU-heavy tasks. However,
given the use case, threading is probably sufficient.

Next, each agent needs to handle a different playlist. So, the bot class should
be modified to take a playlist ID or URL when initialized. Then, each agent
can process its assigned playlist independently.

I need to create a PlaylistProcessor class that inherits from the YouTubeBot.
This class should have methods to fetch playlist items and process each
video in the playlist. Also, each agent should maintain its own report data to
avoid conflicts.

Error handling is important here. If multiple agents are running, one agent's
failure shouldn't affect others. So, wrapping each agent's process in a try-
except block would help. Also, using locks if writing to the same file, but
maybe each agent writes to its own report file to prevent race conditions.

Another consideration is API quota management. Each agent using the same
API key might hit rate limits quickly. The user needs to be aware of Google's
API quotas and possibly distribute requests across multiple API keys if
necessary.

Then, the example code. I'll need to show how to create multiple instances of
the PlaylistProcessor, each with a different playlist. Using threading to start
each agent's process_videos method in a separate thread.

Testing the code is important. The user should run a small number of
playlists first to ensure it works before scaling up. Also, handling exceptions
in each thread and logging them would be helpful for debugging.

I should also mention potential enhancements, like using a task queue or
thread pool for better management, especially with many playlists. Maybe
even suggest using asyncio for asynchronous processing if the user is
comfortable with that.

Security and compliance are still a concern. Multiple agents might trigger
anti-spam measures from YouTube, so advising the user to follow best
practices and API guidelines is crucial.

Finally, summarizing the key points: using threading, separate instances per
playlist, individual reporting, and quota management. Making sure the user
understands the scalability aspects and potential pitfalls.

Yes! You can implement multiple agents working concurrently using Python's
threading/multiprocessing. Here's how to modify the previous solution for
parallel playlist processing:

Modified Architecture

import threading
from queue import Queue
import time

class PlaylistProcessor(YouTubeBot):
    def __init__(self, api_key, playlist_id):
        super().__init__(api_key)
        self.playlist_id = playlist_id
        self.report_data = []
        self.lock = threading.Lock()

    def get_playlist_videos(self):
        videos = []
        request = self.youtube.playlistItems().list(
            part="snippet",
            playlistId=self.playlist_id,
            maxResults=50
        )
        while request:
            response = request.execute()
            videos += [item['snippet']['resourceId']['videoId']
                       for item in response['items']]
            request = self.youtube.playlistItems().list_next(request, response)
        return videos

    def process_videos(self):
        video_ids = self.get_playlist_videos()
        for video_id in video_ids:
            try:
                self.process_video(f'https://youtu.be/{video_id}')
                time.sleep(1)  # Rate limiting
            except Exception as e:
                print(f"Error processing {video_id}: {str(e)}")

class AgentManager:
    def __init__(self, api_key, playlists):
        self.api_key = api_key
        self.playlists = playlists
        self.agents = []

    def create_agents(self):
        for playlist_id in self.playlists:
            agent = PlaylistProcessor(self.api_key, playlist_id)
            self.agents.append(agent)

    def run_agents(self):
        threads = []
        for agent in self.agents:
            thread = threading.Thread(target=agent.process_videos)
            threads.append(thread)
            thread.start()
        for thread in threads:
            thread.join()

    def generate_all_reports(self):
        for idx, agent in enumerate(self.agents):
            agent.generate_report(f'playlist_{idx}_report.xlsx')

# Usage
API_KEY = 'YOUR_API_KEY'
PLAYLISTS = ['PL1', 'PL2', 'PL3']  # Different playlist IDs

manager = AgentManager(API_KEY, PLAYLISTS)
manager.create_agents()
manager.run_agents()
manager.generate_all_reports()
Key Features:

1. Concurrent Processing

o Each agent runs in its own thread

o Processes different playlists simultaneously

o Independent error handling per agent

2. Enhanced Playlist Handling

o Automatic video ID extraction from playlists

o Rate limiting between requests

o Separate reports per playlist

3. Thread Safety

o Locking mechanism for shared resources

o Independent data storage per agent

To Scale Further:

1. Add Proxy Support

class PlaylistProcessor(YouTubeBot):
    def __init__(self, api_key, playlist_id, proxy=None):
        super().__init__(api_key)
        self.playlist_id = playlist_id
        self.proxy = proxy  # Use proxies for different agents

2. Use a Distributed Task Queue

# Use Celery for distributed task queue
from celery import Celery

app = Celery('youtube_tasks', broker='pyamqp://guest@localhost//')

@app.task
def process_playlist_task(api_key, playlist_id):
    processor = PlaylistProcessor(api_key, playlist_id)
    processor.process_videos()
    processor.generate_report()

3. Use a Worker Pool with a Task Queue

task_queue = Queue()

def worker():
    while True:
        playlist_id = task_queue.get()
        processor = PlaylistProcessor(API_KEY, playlist_id)
        processor.process_videos()
        task_queue.task_done()

# Create worker threads
for _ in range(5):  # 5 workers
    threading.Thread(target=worker, daemon=True).start()

# Add playlists to queue
for playlist in PLAYLISTS:
    task_queue.put(playlist)

task_queue.join()
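The hand-rolled worker threads above can also be expressed with the standard library's `concurrent.futures`, which manages thread lifecycle and per-task error handling for you. A minimal sketch (the `process_playlist` function here is a stand-in for `PlaylistProcessor.process_videos`, not the real implementation):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def process_playlist(playlist_id):
    # Stand-in for PlaylistProcessor(API_KEY, playlist_id).process_videos()
    return f"processed {playlist_id}"

playlists = ['PL1', 'PL2', 'PL3']
results = {}

with ThreadPoolExecutor(max_workers=5) as pool:
    # Submit one task per playlist; the pool runs them concurrently
    futures = {pool.submit(process_playlist, pid): pid for pid in playlists}
    for future in as_completed(futures):
        pid = futures[future]
        try:
            results[pid] = future.result()
        except Exception as e:  # one playlist failing doesn't stop the rest
            results[pid] = f"error: {e}"

print(results)
```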

Considerations:

1. API Rate Limits


o YouTube API quota: 10,000 units/day

o Cost: ~1 unit per basic request

o Use time.sleep() between requests
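Beyond a fixed `time.sleep()` between requests, retrying with exponential backoff is the usual way to absorb transient quota and rate-limit errors. A generic sketch (the decorator and the `flaky_call` function are illustrative, not part of any Google library):

```python
import functools
import time

def with_backoff(max_retries=3, base_delay=0.1):
    """Retry a function, doubling the delay after each failed attempt."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(max_retries):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if attempt == max_retries - 1:
                        raise  # out of retries: re-raise the last error
                    time.sleep(base_delay * (2 ** attempt))
        return wrapper
    return decorator

calls = {'n': 0}

@with_backoff(max_retries=3, base_delay=0.01)
def flaky_call():
    calls['n'] += 1
    if calls['n'] < 3:
        raise RuntimeError("simulated quota error")
    return "ok"

print(flaky_call())  # succeeds on the third attempt
```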

2. Performance Optimization

# Use connection pooling
import requests
from requests.adapters import HTTPAdapter
from requests.packages.urllib3.util.retry import Retry

session = requests.Session()
retry = Retry(total=3, backoff_factor=0.1)
adapter = HTTPAdapter(max_retries=retry)
session.mount('http://', adapter)
session.mount('https://', adapter)

3. Error Handling

import googleapiclient.errors
import pytube.exceptions

def process_videos(self):
    try:
        ...  # existing processing code from PlaylistProcessor
    except googleapiclient.errors.HttpError as e:
        if e.resp.status == 403:
            print("Quota exceeded - rotating API key")
    except pytube.exceptions.VideoUnavailable:
        print("Video removed")

4. Monitoring

# Use Flask for simple monitoring
from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/status')
def get_status():
    return jsonify({
        'active_threads': threading.active_count(),
        'playlists_processed': len(manager.agents),
        'videos_processed': sum(len(agent.report_data)
                                for agent in manager.agents)
    })

if __name__ == '__main__':
    threading.Thread(target=app.run).start()
    manager.run_agents()

This architecture allows you to:

 Process multiple playlists simultaneously

 Scale horizontally by adding more workers

 Monitor progress through a simple web interface

 Handle failures gracefully

 Generate separate reports for each playlist

Remember to:

1. Use multiple API keys if hitting quota limits

2. Add proper authentication for comment posting

3. Store credentials securely (consider using environment variables)

4. Comply with YouTube's Terms of Service for automated access
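Point 3 can be as simple as reading the key from the environment instead of hard-coding it (the variable name `YT_API_KEY` is just an example):

```python
import os

# Normally you'd set this in your shell: export YT_API_KEY=...
# setdefault here is only so the sketch runs standalone
os.environ.setdefault("YT_API_KEY", "demo-key")

api_key = os.environ["YT_API_KEY"]
assert api_key, "Set the YT_API_KEY environment variable first"
```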


Okay, the user wants each video to play for exactly one minute before
switching to the next one in the playlist. Let me think about how to approach
this.

First, the previous code used threading to process multiple playlists at the
same time. But now, the requirement is about controlling the playback
duration of each video. The existing code simulated playtime by recording
start and end times, but didn't actually play the video. So, I need to
implement real playback control.

How do I play a YouTube video in Python for a specific duration? The pytube
library can download videos, but playing them programmatically isn't
straightforward. Maybe using a media player like VLC or a Python library like
pygame could work. Alternatively, using a headless browser with Selenium to
control YouTube playback might be an option. However, the user might prefer
a solution that doesn't rely on a browser, especially for scalability with
multiple agents.

Another approach is to stream the video's audio and track the time. Using
ffmpeg with pafy or another library to stream and stop after a minute. But
that might require handling subprocesses and ensuring the process
terminates after a minute.

Also, considering the need for multiple agents processing different playlists
concurrently, threading or multiprocessing would still be necessary. Each
agent needs to manage its own playback timer.

Wait, the user mentioned "play" the video. Does that mean actually
rendering the video, or just simulating the playtime? If it's just about tracking
time, maybe using time.sleep(60) would suffice. But if it's about actual
playback, then a media player integration is needed. The initial code had a
comment about adding actual playback implementation. The user probably
wants the bot to interact with the video as if a user is watching it for a
minute.

Using Selenium with a headless browser could be a way. Each agent could
control a browser instance, play the video, wait a minute, then move to the
next. But that's resource-intensive, especially with many agents.
Alternatively, using a library like vlc-python to play the video for a minute
without a GUI.

Let me outline the steps:


1. Modify the PlaylistProcessor to include a method that plays each video for
exactly one minute.

2. Use a media player library to handle the playback and timing.

3. Ensure that each agent runs in its own thread or process to handle
concurrency.

4. Handle exceptions, such as videos that can't be played or take longer to buffer.

5. Update the reporting to include the actual played duration (should be 1 minute unless interrupted).

Potential issues:

- Playing videos might require handling different formats and codecs.

- Headless playback might not work as expected without proper configuration.

- Resource usage could be high with multiple agents playing videos simultaneously.

- Adherence to YouTube's terms of service, especially with automated playback.

Considering simplicity and reliability, maybe using pytube to download the video stream and then play it using a subprocess with a timeout. For example, using ffplay (from ffmpeg) to play the video for 60 seconds, then terminating the process.

So, in the code, for each video, the agent would:

- Download the video (or just the audio if preferred) using pytube.

- Start a subprocess to play the video with ffplay, setting a timer to kill the
process after 60 seconds.

- Record the start and end times for the report.

This approach avoids GUI dependencies and can be managed in the background. It also allows precise control over the playback duration.

Now, integrating this into the existing code. The PlaylistProcessor's process_videos method would loop through each video ID, call a play_video method that handles the 1-minute playback, then proceed to the next.
Need to handle cases where the video is shorter than a minute. In that case,
play the full video and note the actual duration in the report.

Also, ensure that each agent (processing a playlist) runs in its own thread,
and each video playback is non-blocking or properly timed.

Testing the ffplay approach: using subprocess.Popen to start ffplay, then waiting for 60 seconds before terminating it. But if the video is shorter, the process will end earlier.

Implementing this, the code would need to track the start time, wait until 60
seconds have passed, and then terminate ffplay if it's still running.
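The terminate-after-timeout pattern can be tried independently of ffplay. A minimal sketch where a Python child process that sleeps stands in for the ffplay player (the 1-second timeout stands in for the 60 seconds used by the bot):

```python
import subprocess
import sys
import time

# Child process that would run for 5 seconds (stand-in for ffplay)
player = subprocess.Popen([sys.executable, '-c', 'import time; time.sleep(5)'])

start = time.monotonic()
try:
    # Wait up to 1 second; returns early if the child exits on its own
    player.wait(timeout=1)
except subprocess.TimeoutExpired:
    player.terminate()  # still running after the timeout: stop it
    player.wait()       # reap the terminated process

elapsed = time.monotonic() - start
print(f"stopped after {elapsed:.1f}s")
```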

Additionally, error handling for videos that are unavailable, age-restricted, or


private. The existing code has some error handling, but it might need
expansion.

Updating the report data to include the intended play duration (1 minute)
and the actual duration played (in case the video is shorter or an error
occurs).

Finally, making sure that all dependencies like ffmpeg are installed and
accessible in the system path.

This should meet the user's requirement of each video playing for at least
one minute before switching, with accurate reporting and concurrent
processing of multiple playlists.

Here's the modified solution with 1-minute playback control for each video,
using threading and playback duration management:

import threading
import time
import datetime
from pytube import YouTube
import subprocess

class VideoPlayerAgent:
    def __init__(self, playlist_id, api_key):
        self.playlist_id = playlist_id
        self.api_key = api_key
        self.report_data = []
        self.running = True
        self.lock = threading.Lock()

    def get_playlist_videos(self):
        # Implementation from previous YouTube Data API code
        # Returns list of video URLs in the playlist
        pass

    def play_video(self, url):
        try:
            # Download video stream
            yt = YouTube(url)
            stream = yt.streams.filter(progressive=True,
                                       file_extension='mp4').first()

            # Record start time
            start_time = datetime.datetime.now()

            # Play video using ffplay (part of ffmpeg)
            cmd = ['ffplay', '-nodisp', '-autoexit', stream.url]
            player = subprocess.Popen(cmd)

            # Wait up to 60 seconds; returns early if the video ends first
            try:
                player.wait(timeout=60)
            except subprocess.TimeoutExpired:
                # Stop playback if still running
                player.terminate()

            # Calculate actual play duration
            end_time = datetime.datetime.now()
            actual_duration = (end_time - start_time).total_seconds()
            return actual_duration
        except Exception as e:
            print(f"Error playing {url}: {str(e)}")
            return 0

    def process_playlist(self):
        videos = self.get_playlist_videos()
        for url in videos:
            if not self.running:
                break
            print(f"Playing {url} for 60 seconds")
            play_duration = self.play_video(url)
            with self.lock:
                self.report_data.append({
                    'url': url,
                    'scheduled_duration': 60,
                    'actual_duration': play_duration,
                    'timestamp': datetime.datetime.now().isoformat()
                })
            # Short pause between videos
            time.sleep(2)

    def generate_report(self, filename='report.xlsx'):
        # Create Excel report using pandas
        pass

class ConcurrentPlaylistManager:
    def __init__(self, playlists):
        self.agents = []
        for playlist_id in playlists:
            agent = VideoPlayerAgent(playlist_id, "YOUR_API_KEY")
            self.agents.append(agent)

    def start_all(self):
        self.threads = []
        for agent in self.agents:
            thread = threading.Thread(target=agent.process_playlist)
            self.threads.append(thread)
            thread.start()

    def stop_all(self):
        for agent in self.agents:
            agent.running = False
        for thread in self.threads:
            thread.join()

# Usage
if __name__ == "__main__":
    playlists = ['PL1', 'PL2', 'PL3']  # Your playlist IDs
    manager = ConcurrentPlaylistManager(playlists)
    manager.start_all()

    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        manager.stop_all()
        print("Stopped all agents")

    # Generate reports
    for idx, agent in enumerate(manager.agents):
        agent.generate_report(f"playlist_{idx}_report.xlsx")

Key Features:

1. Precise 60-Second Playback

 Uses ffplay for media playback

 Automatic termination after 60 seconds

 Tracks actual play duration


2. Concurrent Playlist Handling

 Dedicated thread for each playlist

 Independent error handling per agent

 System resource management

3. Enhanced Reporting

 Tracks scheduled vs actual playtime

 Records timestamps for each video

 Generates separate Excel reports

4. Resource Management

 Uses subprocesses for video playback

 Proper cleanup on termination

 Thread-safe data collection

To set up:

1. Install required dependencies:

pip install pytube pandas openpyxl

sudo apt-get install ffmpeg  # Linux (use 'brew install ffmpeg' on macOS)

2. Add YouTube Data API implementation for playlist handling

3. Configure multiple API keys if needed

Important Considerations:

1. Performance Optimization

 Use headless playback: ['ffplay', '-nodisp', '-autoexit', '-loglevel', 'quiet']

 Limit video quality: stream = yt.streams.filter(res='360p').first()

2. Error Handling

def play_video(self, url):
    try:
        ...  # existing playback code
    except Exception as e:
        print(f"Error with {url}: {str(e)}")
        return 0
    finally:
        # Cleanup: stop the player if it is still running
        if 'player' in locals():
            if player.poll() is None:
                player.terminate()

3. Scalability

 Use process pooling instead of threading for CPU-intensive tasks

 Implement workload distribution across multiple machines

 Use proxy rotation for IP management

4. Compliance

 Respect YouTube's Terms of Service

 Stay within API quota limits

 Add proper user agent headers

To extend functionality:

1. Add audio transcription:

def process_audio(self, url):
    yt = YouTube(url)
    audio = yt.streams.filter(only_audio=True).first()
    audio.download(filename="temp_audio.mp3")
    # Add speech-to-text processing here

2. Add comment posting:

def post_comment(self, video_id):
    # Use YouTube Data API
    # Ensure proper OAuth2 authentication
    pass

This solution provides a foundation for concurrent YouTube playlist processing with controlled playback duration. Adjust based on your specific requirements and ensure compliance with all applicable platform policies.
