hasUnsafeContent method
Checks whether any text in the given list is flagged as unsafe by the moderation endpoint. Returns true if at least one text is flagged, false otherwise.
Implementation
Future<bool> hasUnsafeContent(List<String> texts, {String? model}) async {
  // Moderate all texts in a single batched request.
  final results = await moderateTexts(texts, model: model);
  // The batch is unsafe if any individual result was flagged.
  return results.any((result) => result.flagged);
}
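A brief usage sketch. It assumes moderateTexts returns a list of result objects exposing a flagged boolean, as the implementation above relies on; the model name and the input strings are illustrative, not part of the original:

```dart
Future<void> screenComment(String comment) async {
  // Hypothetical call site; model name is an example value.
  final unsafe = await hasUnsafeContent(
    [comment],
    model: 'text-moderation-latest',
  );
  if (unsafe) {
    // Reject the content or queue it for human review.
    print('Comment rejected by moderation.');
  }
}
```

Because the helper short-circuits on the first flagged result via Iterable.any, callers only learn that something in the batch was unsafe, not which text; call moderateTexts directly when per-item results are needed.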