Abstract

Incorporating style-related objectives into shape design has been central to maximizing product appeal. However, algorithmic style capture and reuse have not fully benefited from automated, data-driven methodologies because design intent is inherently difficult to describe. This paper proposes an AI-driven method that fully automates the discovery of brand-related features. First, to tackle the scarcity of vectorized product images, this research proposes two data acquisition workflows: parametric modeling from small curve-based datasets, and vectorization from large pixel-based datasets. Second, this study constructs BIGNet, a two-tier Brand Identification Graph Neural Network, to learn from both the curve-level and chunk-level parameters of scalable vector graphics (SVG). In the first case study, BIGNet not only classifies phone brands but also captures brand-related features across multiple scales, such as the location of the lens, as confirmed by AI evaluation. In the second study, this paper showcases the generalizability of BIGNet by learning from a vectorized car image dataset and validates the consistency and robustness of its predictions across four scenarios. The results match the differences commonly observed between luxury and economy brands in the automobile market. Finally, this paper also visualizes the activation maps generated from a convolutional neural network and shows BIGNet's advantage as a more explainable style-capturing agent.
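The two-tier idea described above (curve-level features summarized per chunk, then message passing over a chunk graph before a global readout) can be illustrated with a minimal sketch. This is a hypothetical, dependency-free toy, not the paper's BIGNet implementation; the feature vectors, the mean aggregator, and the single round of neighbor averaging are all illustrative assumptions:

```python
# Hypothetical two-tier aggregation sketch (NOT the actual BIGNet architecture).
# Tier 1: each "chunk" (e.g., a closed SVG path) summarizes its curve features.
# Tier 2: chunks exchange one round of messages over a chunk adjacency graph,
# followed by a global readout that a brand-classification head could consume.

def mean(vectors):
    """Element-wise mean of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def two_tier_readout(chunks, adjacency):
    """chunks: list of lists of curve feature vectors; adjacency: chunk-index pairs."""
    # Tier 1: curve-level aggregation within each chunk
    chunk_embs = [mean(curves) for curves in chunks]
    # Tier 2: one round of neighbor averaging over the chunk graph
    neighbors = {i: [] for i in range(len(chunks))}
    for a, b in adjacency:
        neighbors[a].append(b)
        neighbors[b].append(a)
    updated = [
        mean([chunk_embs[i]] + [chunk_embs[j] for j in neighbors[i]])
        for i in range(len(chunks))
    ]
    # Global readout: mean over updated chunk embeddings
    return mean(updated)

# Two chunks, each holding two 2-D curve features, connected to each other
emb = two_tier_readout([[[0.0, 2.0], [2.0, 0.0]], [[4.0, 4.0], [4.0, 4.0]]], [(0, 1)])
print(emb)  # [2.5, 2.5]
```

In a learned version, the mean aggregators would be replaced by trainable message and update functions, but the hierarchy (curves within chunks, chunks within a shape) is the structural point.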
